Today we close out our NeurIPS series joined by Aravind Rajeswaran, a PhD student in machine learning and robotics at the University of Washington.
Subscribe: iTunes / Google Play / Spotify / RSS
At NeurIPS, Aravind presented his paper “MOReL: Model-Based Offline Reinforcement Learning.” In our conversation, we explore model-based reinforcement learning and whether models are a “prerequisite” for achieving something analogous to transfer learning. We also dig into MOReL and recent progress in offline reinforcement learning, the differences between developing MOReL models and traditional RL models, and the theoretical results emerging from this research.
Thanks to our Sponsor!
I’d like to send a huge thank you to our friends at Qualcomm Technologies for their continued support of the podcast, and their sponsorship of this NeurIPS series! Qualcomm AI Research is dedicated to advancing AI to make its core capabilities — perception, reasoning, and action — ubiquitous across devices. Their work makes it possible for billions of users around the world to have AI-enhanced experiences on devices powered by Qualcomm Technologies. To learn more about what Qualcomm Technologies is up to on the research front, visit twimlai.com/qualcomm.
Connect with Aravind
- Paper: MOReL: Model-Based Offline Reinforcement Learning
- Relational, Object-Centric Agents for Completing Simulated Household Tasks with Wilka Carvalho – #402
- Advances in Reinforcement Learning with Sergey Levine – #355
- Trends in Reinforcement Learning with Chelsea Finn
- Check out our TWIML Presents: series page!
- Register for the TWIML Newsletter
- Check out the official TWIMLcon:AI Platform video packages here!
- Download our latest eBook, The Definitive Guide to AI Platforms!
“More On That Later” by Lee Rosevere licensed under CC By 4.0