Reinforcement Learning for Industrial AI with Pieter Abbeel
EPISODE 476 | APRIL 19, 2021
About this Episode
Today we're joined by Pieter Abbeel, a Professor at UC Berkeley, co-Director of the Berkeley AI Research Lab (BAIR), as well as Co-founder and Chief Scientist at Covariant.
In our conversation with Pieter, we cover a ton of ground, starting with the specific goals and tasks of his work at Covariant, the shifting needs of industrial AI applications and robots, whether his experience solving real-world problems has changed his opinion on end-to-end deep learning, and the scope of the three problem domains of the models he's building.
We also explore his recent work at the intersection of unsupervised and reinforcement learning, goal-directed RL, his recent paper "Pretrained Transformers as Universal Computation Engines" and where that research thread is headed, and, of course, his new podcast, The Robot Brains Podcast, which you can find on all streaming platforms today!
About the Guest
Pieter Abbeel
UC Berkeley; Covariant
Resources
- Reinforcement Learning Deep Dive with Pieter Abbeel - #28
- Robotic Perception and Control with Chelsea Finn - #29
- Deep Robotic Learning with Sergey Levine - #37
- Trends in Reinforcement Learning with Chelsea Finn - #335
- Advances in Reinforcement Learning with Sergey Levine - #355
- Applying RL to Real-World Robotics with Abhishek Gupta - #466
- Series: TWIML Presents: Industrial AI
- CLIP: Connecting Text and Images
- The Robot Brains Podcast
- Berkeley Artificial Intelligence Research Lab
- Covariant: AI Robotics for the Real World
- UC Berkeley Robot Learning Lab
- A Simple Framework for Contrastive Learning of Visual Representations
- Momentum Contrast for Unsupervised Visual Representation Learning
- CURL: Contrastive Unsupervised Representations for Reinforcement Learning
- Pretrained Transformers as Universal Computation Engines

