Localizing and Editing Knowledge in LLMs with Peter Hase
EPISODE 679 | APRIL 8, 2024
About this Episode
Today we're joined by Peter Hase, a fifth-year PhD student in the University of North Carolina NLP lab. We discuss "scalable oversight" and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how interpretability researchers probe a model's internal representations, and explore the two schools of thought regarding how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risks of releasing open-source foundation models.
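The probing technique mentioned above can be sketched in a few lines. This is a hedged illustration, not the guest's method: the "activations" here are synthetic stand-ins for hidden states that real probing work would extract from a specific layer of an LLM, and the probe is a simple logistic-regression classifier trained to test whether a property is linearly decodable from those states.

```python
# Minimal sketch of a linear "probe": train a classifier on hidden-state
# vectors to check whether some property is linearly decodable from them.
# The activations below are synthetic (an assumption for illustration);
# real probing extracts hidden states from a chosen layer of a model.
import numpy as np

rng = np.random.default_rng(0)

HIDDEN_DIM = 64
N_EXAMPLES = 500

# Fake "activations": examples with label 1 are shifted along a random direction,
# simulating a property that is linearly encoded in the representation space.
direction = rng.normal(size=HIDDEN_DIM)
labels = rng.integers(0, 2, size=N_EXAMPLES)
states = rng.normal(size=(N_EXAMPLES, HIDDEN_DIM)) + 2.0 * labels[:, None] * direction

# Logistic-regression probe trained with plain gradient descent.
w = np.zeros(HIDDEN_DIM)
b = 0.0
lr = 0.1
for _ in range(200):
    logits = states @ w + b
    probs = 1.0 / (1.0 + np.exp(-logits))
    grad = probs - labels                      # gradient of the log-loss
    w -= lr * (states.T @ grad) / N_EXAMPLES
    b -= lr * grad.mean()

accuracy = ((states @ w + b > 0).astype(int) == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

High probe accuracy suggests the property is represented linearly at that layer; interpretability researchers then compare probe results across layers or against control tasks to localize where knowledge is stored.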
About the Guest
Peter Hase
University of North Carolina
Resources
- Paper: The Unreasonable Effectiveness of Easy Training Data for Hard Tasks
- Paper: Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
- Paper: Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks
- Paper: Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models
- Studying Machine Intelligence with Been Kim - #571

