Deep Learning, Transformers, and the Consequences of Scale with Oriol Vinyals

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

About This Episode

Today we’re excited to kick off our annual NeurIPS series, joined by Oriol Vinyals, lead of the deep learning team at DeepMind. We cover a lot of ground in our conversation, beginning with a look at Oriol’s research agenda and why its scope has remained wide even as the field has matured, as well as his thoughts on transformer models: will they take us beyond the current state of deep learning, or would some other architecture be more advantageous? We also touch on his take on the large language model craze before jumping into his recent paper, StarCraft II Unplugged: Large Scale Offline Reinforcement Learning, a follow-up to the popular AlphaStar work from a few years ago. Finally, we discuss the degree to which the work DeepMind and others are doing around games actually translates into real-world, non-game scenarios, as well as recent work on multimodal few-shot learning, and we close with a discussion of the consequences of the level of scale we’ve achieved thus far.

Watch on YouTube

Thanks to our Sponsor!

Today’s show is brought to you by our good friends at SigOpt. Building effective models is a scientific process that requires experimentation to get right. With SigOpt, modelers design novel experiments, explore modeling problems, and optimize models to meet multiple objective metrics in their iterative workflow. Whether you’re tracking training runs or running hyperparameter optimization jobs at scale, SigOpt is designed to meet your needs. Learn why teams from PayPal, Two Sigma, OpenAI, Numenta, Accenture, and many more rely on SigOpt by signing up to use SigOpt for free forever at sigopt.com/signup.

Connect with Oriol!

Resources
