From Prompts to Policies: How RL Builds Better AI Agents with Mahesh Sathiamoorthy
EPISODE 731 | MAY 13, 2025
About this Episode
Today, we're joined by Mahesh Sathiamoorthy, co-founder and CEO of Bespoke Labs, to discuss how reinforcement learning (RL) is reshaping the way we build custom agents on top of foundation models. Mahesh highlights the crucial role of data curation, evaluation, and error analysis in model performance, explains why RL offers a more robust alternative to prompting, and describes how it can improve multi-step tool-use capabilities. We also explore the limitations of supervised fine-tuning (SFT) for tool-augmented reasoning tasks, the reward-shaping strategies Bespoke Labs has used, and its open-source libraries like Curator. Finally, we touch on two of the company's models: MiniCheck for hallucination detection and MiniChart for chart-based QA.
About the Guest
Mahesh Sathiamoorthy
Bespoke Labs
Resources
- Improving Multi-Turn Tool Use with Reinforcement Learning
- Bespoke Curator
- Bespoke-Minicheck
- MiniChart Playground
- Bespoke-Stratos-32B
- Scaling up Open Reasoning with OpenThinker-32B
- Open Thoughts
- Berkeley Function-Calling Leaderboard (BFCL)
- Welcome to the Era of Experience
- verl: Volcano Engine Reinforcement Learning for LLMs
- The Crux: How Leaders Become Strategists
- Data-Centric AI: A Data-Driven Machine Learning Approach
- The Bitter Lesson
- DeepSeek-R1
- DeepSeek-R1-Distill-Qwen-32B
- DeepSeek-R1-Distill-Qwen-7B
- Introducing deep research
- OpenAI deep research
- How OpenAI Builds AI Agents That Think and Act with Josh Tobin - #730

