MOReL: Model-Based Offline Reinforcement Learning with Aravind Rajeswaran

EPISODE 442 | DECEMBER 28, 2020

About this Episode

Today we close out our NeurIPS series joined by Aravind Rajeswaran, a PhD student in machine learning and robotics at the University of Washington. At NeurIPS, Aravind presented his paper "MOReL: Model-Based Offline Reinforcement Learning." In our conversation, we explore model-based reinforcement learning and whether models are a "prerequisite" for achieving something analogous to transfer learning. We also dig into MOReL and recent progress in offline reinforcement learning, the differences between developing MOReL models and traditional RL models, and the theoretical results emerging from this research.

About the Guest

Aravind Rajeswaran

Facebook AI Research (FAIR)
