Today we’re joined by Sergey Levine, an Assistant Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. We last heard from Sergey back in 2017, when we explored Deep Robotic Learning. We caught up with Sergey at NeurIPS 2019, where he and his team presented 12 different papers — which means a lot of ground to cover!
Subscribe: iTunes / Google Play / Spotify / RSS
The recent efforts of Sergey and his lab have been focused on contributing to a future where machines can be “out there in the real world, learning continuously through their own experience.” Sergey shares how many of the papers presented at this year’s NeurIPS conference are working to make that happen, with major developments in model-free reinforcement learning, causality and imitation learning, and offline reinforcement learning.
Connect with Sergey!
- Paper: Causal Confusion in Imitation Learning
- Paper: Wasserstein Dependency Measure for Representation Learning
- Paper: Planning with Goal-Conditioned Policies
- Paper: Search on the Replay Buffer: Bridging Planning and Reinforcement Learning
- Paper: MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies
- Paper: Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction
- Paper: Unsupervised Curricula for Visual Meta-Reinforcement Learning
- Paper: Compositional Plan Vectors
- Paper: Meta-Learning with Implicit Gradients
- Paper: When to Trust Your Model: Model-Based Policy Optimization
- Paper: Guided Meta-Policy Search
- TWIML Presents: NeurIPS 2019
- OpenAI Gym
- Join the TWIML Community!
- Check out our TWIML Presents: series page!
- Register for the TWIML Newsletter
- Check out the official TWIMLcon:AI Platform video packages here!
- Download our latest eBook, The Definitive Guide to AI Platforms!
“More On That Later” by Lee Rosevere licensed under CC By 4.0