The Evolution of Reasoning in Small Language Models with Yejin Choi
EPISODE 761 | JANUARY 29, 2026
About this Episode
Today, we’re joined by Yejin Choi, professor of computer science at Stanford University and senior fellow at the Stanford Institute for Human-Centered AI (HAI). In this conversation, we explore Yejin’s recent work on making small language models reason more effectively. We discuss the central role that high-quality, diverse data plays in closing the intelligence gap between small and large models, and how combining synthetic data generation, imitation learning, and reinforcement learning can unlock stronger reasoning capabilities in smaller models. Yejin explains the risks of homogeneity in model outputs and mode collapse highlighted in her “Artificial Hivemind” paper, and their impact on human creativity and knowledge. We also discuss her team’s novel approaches, including reinforcement learning as a pretraining objective, where models are incentivized to “think” before predicting the next token, and “Prismatic Synthesis,” a gradient-based method for generating diverse synthetic math data while filtering out overrepresented examples. Additionally, we cover the societal implications of AI and the concept of pluralistic alignment: ensuring AI reflects the diverse norms and values of humanity. Finally, Yejin shares her mission to democratize AI beyond large organizations and offers her predictions for the coming year.
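To make the gradient-based diversification idea concrete, below is a minimal sketch of one way such a pipeline could look: embed each synthetic example with a gradient-derived feature vector, cluster the embeddings with k-means (k-means clustering is linked in the resources below), and subsample overrepresented clusters. The function name, the clustering setup, and the per-cluster cap are illustrative assumptions for this episode page, not the actual Prismatic Synthesis implementation described in the paper.

```python
# Hypothetical sketch of gradient-based data diversification, loosely inspired
# by the "Prismatic Synthesis" idea discussed in the episode. Assumptions:
# `grad_features` holds a precomputed per-example gradient embedding (e.g.,
# flattened last-layer gradients from a proxy model), and overrepresented
# clusters are thinned by random subsampling. Not the paper's method.
import numpy as np
from sklearn.cluster import KMeans

def diversify(examples, grad_features, n_clusters=50, cap_per_cluster=20, seed=0):
    """Keep at most `cap_per_cluster` examples from each gradient cluster.

    examples:      list of synthetic training examples
    grad_features: (n_examples, d) array of per-example gradient embeddings
                   (requires n_examples >= n_clusters for k-means to fit)
    """
    # Group examples by the "reasoning mode" their gradients fall into.
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(grad_features)

    rng = np.random.default_rng(seed)
    kept = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        if len(idx) > cap_per_cluster:
            # Overrepresented cluster: downsample to the cap so no single
            # mode dominates the training mix.
            idx = rng.choice(idx, size=cap_per_cluster, replace=False)
        kept.extend(int(i) for i in idx)

    return [examples[i] for i in sorted(kept)]
```

The design intuition, as discussed in the episode, is that capping each cluster counteracts the tendency of synthetic generation to collapse onto a few dominant patterns, keeping the resulting dataset diverse enough to improve generalization in smaller models.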
About the Guest
Yejin Choi
Stanford University
Resources
- Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)
- RLP: Reinforcement as a Pretraining Objective
- Prismatic Synthesis: Gradient-based Data Diversification Boosts Generalization in LLM Reasoning
- Spectrum Tuning: Post-Training for Distributional Coverage and In-Context Steerability
- DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
- Llama 3
- K-Means Clustering
- Social Commonsense Reasoning with Yejin Choi - #518
