Is It Time to Rethink LLM Pre-Training? with Aditi Raghunathan
EPISODE 747 | SEPTEMBER 16, 2025
About this Episode
Today, we're joined by Aditi Raghunathan, assistant professor at Carnegie Mellon University, to discuss the limitations of LLMs and how we can build more adaptable and creative models. We dig into her ICML 2025 Outstanding Paper Award winner, “Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction,” which examines why LLMs struggle to generate truly novel ideas. We explore the "Roll the dice" approach, which encourages structured exploration by injecting randomness at the start of generation, and the "Look before you leap" concept, which trains models to take "leaps of thought" using alternative objectives that yield more diverse and structured outputs. We also cover Aditi’s work on the counterintuitive phenomenon of "catastrophic overtraining," in which pre-training on more data improves benchmark performance but degrades a model's ability to be fine-tuned for new tasks, and her lab's research on building more controllable and reliable models, including "memorization sinks," an architectural approach that isolates memorized information and enables its targeted unlearning.
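As a rough illustration of the "Roll the dice" idea described above, the sketch below (our own, not the paper's code; the model, prompt, and seed length are arbitrary assumptions) prepends a few randomly sampled tokens to the prompt, so the randomness is committed up front rather than spread across token-by-token temperature sampling.

```python
# Minimal sketch of seed-conditioning: inject randomness at the *start* of
# generation by prepending random "seed" tokens to the prompt, then decode
# greedily. Model choice (gpt2), seed length, and prompt are illustrative
# assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Write the opening line of a story:"
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids

# "Roll the dice": sample a short block of random token IDs as a seed prefix.
seed_len = 8
seed_ids = torch.randint(0, tokenizer.vocab_size, (1, seed_len))

# Condition generation on [random seed | prompt]; decoding itself is greedy,
# so all of the output diversity comes from the up-front seed.
input_ids = torch.cat([seed_ids, prompt_ids], dim=1)
output = model.generate(input_ids, max_new_tokens=40, do_sample=False,
                        pad_token_id=tokenizer.eos_token_id)

# Show only the continuation generated after the prompt.
print(tokenizer.decode(output[0, input_ids.shape[1]:], skip_special_tokens=True))
```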
About the Guest
Aditi Raghunathan
Carnegie Mellon University
Resources
- Aditi Raghunathan’s Group @ ICML 2025
- Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction
- Overtrained Language Models Are Harder to Fine-Tune
- Memorization Sinks: Isolating Memorization during LLM Training
- Beyond benchmarks: the case for spherical cows in LLM research (Invited talk at MOSS workshop)
- When “good” data backfires: questioning common wisdom in data curation (Invited talk at DataWorld workshop)
- Scaling Laws for Precision
- Weight Ensembling Improves Reasoning in Language Models
- ICML 2025 Panel: AI Safety Social
- OLMo: Accelerating the Science of Language Models
- Llama 2: Open Foundation and Fine-Tuned Chat Models
- The Llama 3 Herd of Models
- Context-Parametric Inversion: Why Instruction Finetuning May Not Actually Improve Context Reliance
- KernelBench: Can LLMs Write Efficient GPU Kernels?
- Circuit Tracing: Revealing Computational Graphs in Language Models
- Exploring the “Biology” of LLMs with Circuit Tracing with Emmanuel Ameisen - #727
