The Benefit of Bottlenecks in Evolving Artificial Intelligence with David Ha
EPISODE 535 | NOVEMBER 11, 2021
About this Episode
Today we’re joined by David Ha, a research scientist at Google.
In nature, there are many examples of "bottlenecks," or constraints, that have shaped our development as a species. Building on this idea, David posits that these same evolutionary bottlenecks could benefit the training of neural network models as well. In our conversation with David, we cover a TON of ground, starting with the aforementioned biological inspiration for his work, then digging deeper into the different types of constraints he's applied to ML systems. We explore abstract generative models and how far the training of agents inside generative models has advanced, as well as quite a few papers, including Neuroevolution of Self-Interpretable Agents, World Models and Attention for Reinforcement Learning, and The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning.
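To give a taste of the permutation-invariance idea from the Sensory Neuron paper, here's a minimal NumPy sketch (our own illustration, not the paper's implementation; all names and dimensions are made up for the example). A fixed set of learned queries attends over a set of observations, so shuffling the order of the inputs leaves the output unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(inputs, W_k, W_v, queries):
    # Keys/values are computed per input element; a fixed set of
    # learned queries attends over the whole set of inputs.
    K = inputs @ W_k                                # (n_inputs, d)
    V = inputs @ W_v                                # (n_inputs, d)
    scores = queries @ K.T / np.sqrt(K.shape[1])    # (n_queries, n_inputs)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over the input set
    return weights @ V                              # (n_queries, d) summary

d_in, d, n_obs, n_q = 8, 16, 32, 4
W_k = rng.normal(size=(d_in, d))
W_v = rng.normal(size=(d_in, d))
queries = rng.normal(size=(n_q, d))

obs = rng.normal(size=(n_obs, d_in))        # a "set" of sensory observations
shuffled = obs[rng.permutation(n_obs)]      # same observations, shuffled order

out_a = attention_pool(obs, W_k, W_v, queries)
out_b = attention_pool(shuffled, W_k, W_v, queries)
print(np.allclose(out_a, out_b))            # True: output ignores input order
```

Because the softmax-weighted sum runs over the whole input set, reordering the inputs permutes the keys and values consistently and the pooled output is unchanged. In the paper itself, each input dimension is first processed by a shared "sensory neuron" module before attention pooling; the sketch above only shows why the pooling step is insensitive to input ordering.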
This interview is Nerd Alert certified, so get your notes ready!
P.S. David is one of our favorite follows on Twitter (@hardmaru), so check him out and share your thoughts on this interview and his work!
About the Guest
David Ha
Google AI
Resources
- Google Scholar
- Blog: Using Selective Attention in Reinforcement Learning Agents
- Blog: Exploring Weight Agnostic Neural Networks
- Blog: Generating Large Images from Latent Vectors
- Blog: Teaching Machines to Draw
- Paper: Weight Agnostic Neural Networks
- Paper: World Models
- Paper: Neuroevolution of Self-Interpretable Agents
- Video: selective attention test
- Paper: Permutation-Invariant Neural Networks for Reinforcement Learning
- Article: The Backwards Brain Bicycle: Un-Doing Understanding
- #94 - Neuroevolution: Evolving Novel Neural Network Architectures w/ Kenneth Stanley
- #119 - Adversarial Attacks Against Reinforcement Learning Agents w/ Ian Goodfellow and Sandy Huang
- Quick, Draw!
- Introducing PlaNet: A Deep Planning Network for Reinforcement Learning
- Paper: A Neural Representation of Sketch Drawings (Sketch-RNN)
- Paper: Continuous-Discrete Reinforcement Learning for Hybrid Control in Robotics
- Paper: Simple random search provides a competitive approach to reinforcement learning
- Paper: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks
- Paul Bach-y-Rita
- DQN (Deep Q-Network)