Gauge Equivariant CNNs, Generative Models, and the Future of AI with Max Welling

This Week in Machine Learning & AI

Today we’re joined by Max Welling, research chair in Machine Learning at the University of Amsterdam, VP of Technologies at Qualcomm, and Fellow at the Canadian Institute for Advanced Research, or CIFAR.

In our conversation, we discuss Max’s research at Qualcomm AI Research and the university, including his work on Bayesian deep learning, graph CNNs, and gauge equivariant CNNs, as well as his work on power efficiency for AI via compression, quantization, and compilation. We also discuss Max’s thoughts on the future of the AI industry, in particular the relative importance of models, data, and compute.

Thanks to our Sponsor


Thanks to our friends at Qualcomm for sponsoring today’s show! As you’ll hear in the conversation with Max, Qualcomm has been actively involved in AI research for well over a decade, leading to advances in power-efficient on-device AI as well as in algorithms such as Bayesian deep learning, graph CNNs, gauge equivariant CNNs, and more. Of course, Qualcomm AI powers some of the latest and greatest Android devices with their Snapdragon chipset family. From this strong foundation in the mobile chipset space, Qualcomm now has the goal of scaling AI across devices and making AI ubiquitous. In this vein, a product I’m particularly interested in seeing is their Cloud AI 100 line of data center inference chips, which I learned about at their recent press launch.

To learn more about what Qualcomm is up to, including their AI research, platforms, and developer tools, visit twimlai.com/qualcomm.

Join our Study Group!

For the next four weeks, starting May 25th, we’ll be diving head first into Pieter Abbeel’s Full Stack Deep Learning course. This course is a great complement to the fast.ai courses we’ve done so far and covers practical topics like problem formulation, data acquisition and preparation, establishing the right frameworks, platforms and compute infrastructure, debugging and ensuring reproducibility, and deploying and scaling your models.

For more information, or to register for this study group, visit twimlai.com/fullstack.

We can’t thank our volunteer study group hosts enough for all their hard work. A huge shout out to Michael, Christian, Kai, Sanyam, Joseph, Dinesh and everyone else that’s been involved in making this group happen.

About Max

From the Interview

“More On That Later” by Lee Rosevere, licensed under CC BY 4.0

1 comment
  • Stephen G Odaibo

    Is explicit engineering of coordinate systems and transformations a throwback to the explicit feature engineering era? And is that era fundamentally inferior to the deep learning era in terms of learning performance? So far it certainly seems so to me in some respects.

    One wonders if the deep learning realization represents something deeply fundamental to learning, such that more data and more compute power is indeed all we need. Deep models are so far capable of learning far more than we can explicitly engineer. Just like when you turn your head, you neither need to nor are capable of explicitly thinking about and calculating all the transformations required to keep objects in your field of view constant. Similarly, deep learning models may not ever need to be explicitly instructed about invariances or equivariances. Currently they learn these implicitly and automatically, just as we humans do.

    Or is learning divisible into more primitive features (such as coordinate transformations) that deep models will be more efficient at learning, versus more abstract, high-level features that may need to be engineered by rules?

    By the way, a brilliant work trajectory overall by Max Welling! The same Max Welling of Kingma & Welling (2013), “Auto-Encoding Variational Bayes,” which didn’t explicitly come up during the interview … an excellent interview which, as noted, could have gone on for a few more hours.
