In this episode of our AI Rewind series, we introduce a new friend of the show, Simon Osindero, Staff Research Scientist at DeepMind.
We discuss trends in Deep Reinforcement Learning in 2018 and beyond. We've packed a lot into this show, as Simon walks us through many of last year's important papers and developments in areas like Imitation Learning, Unsupervised RL, Meta-learning, and more.
Mentioned in the Interview
- Paper: Observe and Look Further: Achieving Consistent Performance on Atari
- Paper: Playing hard exploration games by watching YouTube
- Paper: Kickstarting Deep Reinforcement Learning
- Paper: Mix&Match: Agent Curricula for Reinforcement Learning
- Paper: Unsupervised Control Through Non-Parametric Discriminative Rewards
- Paper: Meta-Reinforcement Learning of Structured Exploration Strategies
- Paper: Meta-Gradient Reinforcement Learning
- Paper: Evolved Policy Gradients
- Paper: Randomized Prior Functions for Deep Reinforcement Learning
- Paper: Exploration By Random Network Distillation
- Paper: Implicit Quantile Networks For Distributional Reinforcement Learning
- Paper: Sample Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion (STEVE)
- Paper: IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
- Paper: Distributed Distributional Deterministic Policy Gradients
- Paper: Distributed Prioritized Experience Replay
- RL Model Zoo
“More On That Later” by Lee Rosevere licensed under CC By 4.0