Today we’re joined by Max Welling, research chair in machine learning at the University of Amsterdam, VP of Technologies at Qualcomm, and Fellow at the Canadian Institute for Advanced Research, or CIFAR.
Subscribe: iTunes / Google Play / Spotify / RSS
In our conversation, we discuss Max’s research at Qualcomm AI Research and at the University of Amsterdam, including his work on Bayesian deep learning, graph CNNs, and gauge equivariant CNNs, as well as on power efficiency for AI via compression, quantization, and compilation. We also discuss Max’s thoughts on the future of the AI industry, in particular the relative importance of models, data, and compute.
Thanks to our Sponsor
Thanks to our friends at Qualcomm for sponsoring today’s show! As you’ll hear in the conversation with Max, Qualcomm has been actively involved in AI research for well over a decade, leading to advances in power-efficient on-device AI as well as in algorithms such as Bayesian deep learning, graph CNNs, gauge equivariant CNNs, and more. Of course, Qualcomm AI powers some of the latest and greatest Android devices with their Snapdragon chipset family. From this strong foundation in the mobile chipset space, Qualcomm now has the goal of scaling AI across devices and making AI ubiquitous. In this vein, a product I’m particularly interested in seeing is their Cloud AI 100 line of data center inference chips, which I learned about at their recent press launch.
To learn more about what Qualcomm is up to, including their AI research, platforms, and developer tools, visit twimlai.com/qualcomm.
Join our Study Group!
For the next four weeks, starting May 25th, we’ll be diving headfirst into Pieter Abbeel’s Full Stack Deep Learning course. This course is a great complement to the fast.ai courses we’ve done so far and covers practical topics like problem formulation; data acquisition and preparation; establishing the right frameworks, platforms, and compute infrastructure; debugging and ensuring reproducibility; and deploying and scaling your models.
For more information, or to register for this study group, visit twimlai.com/fullstack.
We can’t thank our volunteer study group hosts enough for all their hard work. A huge shout-out to Michael, Christian, Kai, Sanyam, Joseph, Dinesh, and everyone else who’s been involved in making this group happen.
From the Interview
- Qualcomm AI Research
- Article: “Do we still need models or just more data and compute?”
- For more series like this one, visit the TWIML Presents page!
- Join the Meetup
- Register for the TWIML Newsletter
“More On That Later” by Lee Rosevere, licensed under CC BY 4.0