AI Rewind 2018: Trends in Reinforcement Learning with Simon Osindero
EPISODE 217 | JANUARY 3, 2019
About this Episode
In this episode of our AI Rewind series, we introduce a new friend of the show, Simon Osindero, Staff Research Scientist at DeepMind.
We discuss trends in Deep Reinforcement Learning in 2018 and beyond. We've packed a lot into this show, as Simon walks us through many of the important papers and developments from last year in areas like Imitation Learning, Unsupervised RL, Meta-learning, and more.
About the Guest
Simon Osindero
DeepMind
Resources
- Paper: Observe and Look Further: Achieving Consistent Performance on Atari
- Paper: Playing hard exploration games by watching YouTube
- Paper: Kickstarting Deep Reinforcement Learning
- Paper: Mix&Match: Agent Curricula for Reinforcement Learning
- Paper: Unsupervised Control Through Non-Parametric Discriminative Rewards
- Paper: Meta-Reinforcement Learning of Structured Exploration Strategies
- Paper: Meta-Gradient Reinforcement Learning
- Paper: Evolved Policy Gradients
- Paper: Randomized Prior Functions for Deep Reinforcement Learning
- Paper: Exploration By Random Network Distillation
- Paper: Implicit Quantile Networks for Distributional Reinforcement Learning
- Paper: Sample Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion (STEVE)
- Paper: IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
- Paper: Distributed Distributional Deterministic Policy Gradients
- Paper: Distributed Prioritized Experience Replay
- Lucid
- Dopamine
- RL Model Zoo

