100x Improvements in Deep Learning Performance with Sparsity with Subutai Ahmad
EPISODE 562 | MARCH 7, 2022
About this Episode
Today we’re joined by Subutai Ahmad, VP of research at Numenta. While we’ve had numerous conversations about the biological inspirations of deep learning models with folks working at the intersection of deep learning and neuroscience, with Subutai we venture into uncharted territory. We set the stage by exploring some of the fundamental ideas behind Numenta’s research and the current landscape of neuroscience, before turning to our first big topic of the podcast: the cortical column. Cortical columns are groups of neurons in the cortex of the brain with nearly identical receptive fields; we discuss the behavior of these columns, why they’re a structure worth mimicking computationally, how far along we are in understanding them, and how they relate to individual neurons.
We also discuss what it means for a model to have an inherent 3D understanding, what it means for computational models to be inherently sensorimotor, and where these lines of research stand today. Finally, we dig into our other big idea: sparsity. We explore the fundamental ideas behind sparsity, the differences between sparse and dense networks, and how applying sparsity and optimization can drive greater efficiency in current deep learning networks, including transformers and other large language models.
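To make the sparse-vs-dense distinction concrete, here is a minimal NumPy sketch of the two kinds of sparsity touched on in the conversation and in the “Two Sparsities Are Better Than One” paper: sparse weights (a fixed binary mask over the weight matrix) and sparse activations (a k-winner-take-all nonlinearity). This is an illustrative assumption on our part, not Numenta’s actual implementation; all names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def k_winners(x, k):
    # Sparse activations: keep only the k largest values, zero the rest
    # (a k-winner-take-all nonlinearity).
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]  # indices of the k largest activations
    out[top] = x[top]
    return out

n_in, n_out = 128, 64

# Dense layer: every input connects to every output unit.
W_dense = rng.standard_normal((n_out, n_in))

# Sparse layer: a fixed binary mask zeroes out ~90% of the weights.
sparsity = 0.9
mask = rng.random((n_out, n_in)) > sparsity  # ~10% of entries kept
W_sparse = W_dense * mask

x = rng.standard_normal(n_in)

dense_out = W_dense @ x                    # dense weights, dense activations
sparse_out = k_winners(W_sparse @ x, k=8)  # sparse weights AND sparse activations

print(np.count_nonzero(W_sparse) / W_sparse.size)  # ~0.1 of weights are nonzero
print(np.count_nonzero(sparse_out))                # exactly 8 active units
```

Combining both forms, as in a sparse-sparse network, means most weight-activation products are zero by construction, which is where the large potential efficiency gains discussed in the episode come from, provided the hardware or kernels can skip the zeros.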
About the Guest
Subutai Ahmad
Numenta
Thanks to our sponsor SigOpt
SigOpt was born out of the desire to make experts more efficient. While co-founder Scott Clark was completing his PhD at Cornell, he noticed that the final stage of research was often a domain expert tweaking what they had built via trial and error. After completing his PhD, Scott developed MOE to solve this problem, and used it to optimize machine learning models and A/B tests at Yelp. SigOpt was founded in 2014 to bring this technology to every expert in every field.
Resources
- Paper: Unsupervised Real-Time Anomaly Detection for Streaming Data
- Article: Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex
- Machine Intelligence Research (Current Research Projects)
- Paper: Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks
- Blog: Sparsity Without Sacrifice: Accurate BERT with 10x Fewer Parameters
- Paper: Dynamic Routing Between Capsules
