Advancements in Reinforcement Learning with Sergey Levine
EPISODE 355 | MARCH 9, 2020
About this Episode
Today we're joined by Sergey Levine, an Assistant Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. We last heard from Sergey back in 2017, when we explored Deep Robotic Learning. We caught up with him at NeurIPS 2019, where he and his team presented 12 different papers -- which means a lot of ground to cover!
Sergey and his lab's recent efforts are focused on contributing to a future where machines can be "out there in the real world, learning continuously through their own experience." Sergey shares how many of the papers presented at this most recent NeurIPS are working to make that happen, with major developments in model-free reinforcement learning, causality and imitation learning, and offline reinforcement learning.
About the Guest
Sergey Levine
UC Berkeley, Physical Intelligence
Resources
- Paper: Causal Confusion in Imitation Learning
- Paper: Wasserstein Dependency Measure for Representation Learning
- Paper: Planning with Goal-Conditioned Policies
- Paper: Search on the Replay Buffer: Bridging Planning and Reinforcement Learning
- Paper: MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies
- Paper: Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction
- Paper: Unsupervised Curricula for Visual Meta-Reinforcement Learning
- Paper: Compositional Plan Vectors
- Paper: Meta-Learning with Implicit Gradients
- Paper: When to Trust Your Model: Model-Based Policy Optimization
- Paper: Guided Meta-Policy Search
- TWIML Presents: NeurIPS 2019
- RoboNet
- OpenAI Gym
