Deep Learning, Transformers, and the Consequences of Scale with Oriol Vinyals

EPISODE 546

About this Episode

Today we’re excited to kick off our annual NeurIPS coverage, joined by Oriol Vinyals, lead of the deep learning team at DeepMind. We cover a lot of ground in our conversation with Oriol, beginning with a look at his research agenda and why its scope has remained wide even as the field has matured, and his thoughts on transformer models and whether they will take us beyond the current state of deep learning, or whether some other model architecture would be more advantageous. We also touch on his views on the large language model craze before jumping into his recent paper StarCraft II Unplugged: Large Scale Offline Reinforcement Learning, a follow-up to the popular AlphaStar work from a few years ago. Finally, we discuss the degree to which the work that DeepMind and others are doing around games actually translates into real-world, non-game scenarios, explore recent work on multimodal few-shot learning, and close with a discussion of the consequences of the level of scale we’ve achieved thus far.
Connect with Oriol

Thanks to our sponsor SigOpt

SigOpt was born out of the desire to make experts more efficient. While co-founder Scott Clark was completing his PhD at Cornell, he noticed that the final stage of research was often a domain expert tweaking what they had built via trial and error. After completing his PhD, Scott developed MOE to solve this problem and used it to optimize machine learning models and A/B tests at Yelp. SigOpt was founded in 2014 to bring this technology to every expert in every field.


