Intelligent content that gives practitioners, innovators and leaders an inside look at the present and future of ML & AI technologies.

LATEST
EPISODE 761  |  January 29, 2026
Today, we're joined by Yejin Choi, professor in the Computer Science Department at Stanford University and senior fellow at the Stanford Institute for Human-Centered AI (HAI). In this conversation, we explore Yejin's recent work on making small language models reason more effectively. We discuss the central role that high-quality, diverse data plays in closing the intelligence gap between small and large models, and how combining synthetic data generation, imitation learning, and reinforcement learning can unlock stronger reasoning capabilities in smaller models. Yejin explains the risks of homogeneity in model outputs and mode collapse highlighted in her "Artificial Hivemind" paper, and their impact on human creativity and knowledge. We also discuss her team's novel approaches, including reinforcement learning as a pre-training objective, where models are incentivized to "think" before predicting the next token, and "Prismatic Synthesis," a gradient-based method for generating diverse synthetic math data while filtering overrepresented examples. Additionally, we cover the societal implications of AI and the concept of pluralistic alignment: ensuring AI reflects the diverse norms and values of humanity. Finally, Yejin shares her mission to democratize AI beyond large organizations and offers her predictions for the coming year.
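The diversity-filtering idea that motivates approaches like "Prismatic Synthesis" can be illustrated with a toy sketch. To be clear, the actual method discussed in the episode is gradient-based; the snippet below substitutes plain token-overlap (Jaccard) similarity as the redundancy signal, and the function names (`jaccard`, `diversity_filter`) and examples are purely illustrative, not from the paper:

```python
# Toy sketch of diversity filtering over synthetic training examples:
# greedily keep an example only if it is sufficiently dissimilar from
# everything kept so far, dropping overrepresented near-duplicates.
# NOTE: this is NOT the actual Prismatic Synthesis algorithm, which
# measures diversity in gradient space; token-set Jaccard similarity
# is used here purely as a simple stand-in signal.

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def diversity_filter(examples: list[str], threshold: float = 0.6) -> list[str]:
    """Greedily keep examples whose similarity to every already-kept
    example stays below the threshold."""
    kept: list[str] = []
    for ex in examples:
        if all(jaccard(ex, k) < threshold for k in kept):
            kept.append(ex)
    return kept

synthetic = [
    "Solve for x: 2x + 3 = 11",
    "Solve for x: 2x + 3 = 15",  # near-duplicate of the first
    "A train travels 60 miles in 1.5 hours; find its speed",
]
print(diversity_filter(synthetic))
```

Here the second equation is a near-duplicate of the first and gets dropped, while the word problem survives; a real pipeline would measure redundancy in gradient or embedding space rather than over surface tokens.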
RECENT
Intelligent Robots 2026: Are We There Yet? with Nikita Rudin
EPISODE 760  |  January 8, 2026
Rethinking Pre-training for Agentic AI with Aakanksha Chowdhery
EPISODE 759  |  December 17, 2025
Why Vision-Language Models Ignore What They See with Munawar Hayat
EPISODE 758  |  December 9, 2025

INSIGHTS

LATEST REPORT

Retrieval-augmented generation promised to bring ChatGPT’s magic to enterprise data. But while organizations rushed to build chatbots, they often struggled to deliver real business value. This comprehensive guide reveals RAG’s full potential beyond conversational interfaces.
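For readers new to the pattern, retrieval-augmented generation boils down to two steps: retrieve the documents most relevant to a query, then ground the model's answer in them. The sketch below is a deliberately minimal, dependency-free illustration; the documents and names are made up, and bag-of-words cosine similarity stands in for the dense embeddings, vector store, and LLM call a production system would use:

```python
# Minimal RAG sketch: retrieve the most relevant document for a query
# via bag-of-words cosine similarity, then assemble a prompt that
# grounds the answer in that document. Illustrative only; real systems
# use dense embeddings, a vector store, and an LLM call.

from collections import Counter
import math
import re

def bow(text: str) -> Counter:
    """Bag-of-words vector: lowercase word counts, punctuation stripped."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    return max(docs, key=lambda d: cosine(bow(query), bow(d)))

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping is free on orders over 50 dollars.",
]
question = "What is the refund policy?"
context = retrieve(question, docs)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The retrieval step is what the guide's broader point turns on: the same retrieve-then-ground loop can feed summarization, report generation, or agent workflows, not just a chatbot.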

Community

The TWIML Community is a global network of machine learning, deep learning, and AI practitioners and enthusiasts.

We organize ongoing educational programs, including study groups for several popular ML/AI courses such as fast.ai Deep Learning, machine learning and NLP, Stanford CS224N, DeepLearning.AI, and more. We also host several special interest groups focused on topics like Swift for TensorFlow and competing in Kaggle competitions.

TWIML Community

Work with Us

TWIML creates and curates intelligent content that helps makers build better experiences for their users, and gives executives an inside look at the real-world application of intelligence technologies. We also build and support communities of innovators who are as excited about these technologies as we are. We advise a variety of leading organizations as well, helping to craft strategies for taking advantage of the vast opportunities created by ML and AI.