Rethinking Pre-Training for Agentic AI with Aakanksha Chowdhery
EPISODE 759 | DECEMBER 17, 2025
About this Episode
Today, we're joined by Aakanksha Chowdhery, member of technical staff at Reflection, to explore the fundamental shifts required to build true agentic AI. While the industry has largely focused on post-training techniques to improve reasoning, Aakanksha draws on her experience leading pre-training efforts for Google's PaLM and early Gemini models to argue that pre-training itself must be rethought to move beyond static benchmarks. We explore the limitations of next-token prediction for multi-step workflows and examine how attention mechanisms, loss objectives, and training data must evolve to support long-form reasoning and planning. Aakanksha shares insights on the difference between context retrieval and actual reasoning, the importance of "trajectory" training data, and why scaling remains essential for discovering emergent agentic capabilities such as error recovery and dynamic tool learning.
About the Guest
Aakanksha Chowdhery
Reflection
Resources
- Reflection AI
- Reflection AI, an A.I. Model Start-Up, Raises $2 Billion
- Gemini: A Family of Highly Capable Multimodal Models
- Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
- PaLM: Scaling Language Modeling with Pathways
- Michelangelo: Long Context Evaluations Beyond Haystacks via Latent Structure Queries
- Measuring AI Ability to Complete Long Tasks
- Terminal-Bench
- SWE-bench
- Training Verifiers to Solve Math Word Problems
