This week's AI Research Review explores the localization and editing of factual associations in GPT models. The authors demonstrate that individual facts can be pinpointed inside the model and then modified, contributing to a better understanding of how knowledge is represented in language models. The findings suggest that the MLP layers behave like key-value memories in which factual associations are stored, and that a targeted change to a single down-projection matrix is enough to alter the fact the model predicts. These results could allow facts to be updated in a model at scale, rather than retraining on new data. At the same time, the work raises further questions about the nature of facts within language models.
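
To make the editing mechanism concrete, here is a minimal sketch of a rank-one update applied to a stand-in down-projection matrix, so that a chosen "key" activation maps to a new "value" vector. The dimensions, tensors, and the simplified update rule below are illustrative assumptions for this review, not the paper's exact (covariance-weighted) procedure.

```python
import torch

def rank_one_edit(W: torch.Tensor, k: torch.Tensor, v_target: torch.Tensor) -> torch.Tensor:
    """Return an edited weight W' such that W' @ k == v_target,
    while changing W only by a rank-one term."""
    residual = v_target - W @ k                   # what the current weight gets wrong for this key
    update = torch.outer(residual, k) / (k @ k)   # rank-one correction along the key direction
    return W + update

# Toy dimensions standing in for the MLP hidden size and the model width (hypothetical values).
d_mlp, d_model = 8, 4
W = torch.randn(d_model, d_mlp)    # down-projection: maps MLP "keys" back into the residual stream
k = torch.randn(d_mlp)             # "key": MLP activation associated with the fact's subject
v_target = torch.randn(d_model)    # "value": output vector encoding the edited fact

W_edited = rank_one_edit(W, k, v_target)
print(torch.allclose(W_edited @ k, v_target, atol=1e-5))  # True: the key now maps to the new value
```

The point of the sketch is that a single low-rank change to one weight matrix suffices to rewire one key-value association while leaving the rest of the matrix, and hence most other behavior, largely untouched.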