Today we’re joined by Jeff Gehlhaar, VP of Technology and Head of AI Software Platforms at Qualcomm.
Subscribe: iTunes / Google Play / Spotify / RSS
As we’ve explored in our conversations with both Gary Brotman and Max Welling, Qualcomm has a hand in a wide range of machine learning research and hardware, and our conversation with Jeff is no different. We discuss how the various training frameworks fit into the developer experience when working with Qualcomm chipsets, examples of federated learning in the wild, the role inference will play in data center devices, and more.
Thanks to our Sponsor
I’d like to send a huge thanks to Qualcomm for their support of the podcast and for sponsoring today’s episode. As you’ll hear in my conversation with Jeff, Qualcomm is taking a systems approach to helping the industry address the challenges associated with AI on mobile devices and at the edge. In support of their Snapdragon chipset family, which powers some of the latest and greatest Android devices, Qualcomm provides their own suite of software tools and is also actively supporting a variety of partner and industry projects including the Android Neural Network APIs, TensorFlow Lite, the tinyML initiative, and the Open Neural Network Exchange, or ONNX, ecosystem.
To learn more about what Qualcomm is up to, including their AI research, platforms, and developer tools, visit twimlai.com/qualcomm.
Many of you are aware that we’ve been hosting a couple of paper-reading meetups in conjunction with the podcast. Well, I’m excited to share that Matt Kenney, Duke staff researcher and long-time listener and friend of the show, has stepped up to help take this group to the next level. The paper-reading meetup will now be meeting every other Sunday at 1 PM Eastern Time to dissect the latest and greatest academic research papers in ML and AI. If you want to take your understanding of the field to the next level, please join us this Sunday, July 14th, or check twimlai.com/meetup for more upcoming community events.
We’ve also got a couple of study groups currently running, with one group working through the fast.ai Deep Learning from the Foundations course, and another working through the Stanford cs224n Deep Learning for Natural Language Processing course. These study groups just started and will be working on these courses through October and November respectively, so it’s not too late to join. Sign up on the meetup page at twimlai.com/meetup.
From the Interview
- DARPA SyNAPSE
- Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting (Computational Neuroscience Series) – Find on Amazon
- TinyML Summit 2019
- ONNX – GitHub
- Download our AI Platforms eBook Series!
- For more series like this one, visit the TWiML Presents: page!
- Join the Meetup
- Register for the TWiML Newsletter
“More On That Later” by Lee Rosevere licensed under CC By 4.0