This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWIML Talk guests. This time around I’m joined by Joan Bruna, Assistant Professor at the Courant Institute of Mathematical Sciences and the Center for Data Science at NYU, and Michael Bronstein, Associate Professor at Università della Svizzera italiana (Switzerland) and Tel Aviv University.
Subscribe: iTunes / Google Play / Spotify / RSS
Joan and Michael join me after their tutorial on Geometric Deep Learning on Graphs and Manifolds. In our conversation we dig pretty deeply into the ideas behind geometric deep learning and how we can use it in applications like 3D vision, sensor networks, drug design, biomedicine, and recommendation systems. This is definitely a Nerd Alert show, and one that will get your multi-dimensional neurons firing. Enjoy!
Join the Giveaway!
Recently we hit a very exciting milestone for the podcast: One Million Listens! We’d hate to miss an opportunity to show you some love, so we’re holding another listener appreciation contest to celebrate the occasion. Tweet to us @twimlai using #TWIML1MIL to enter. Every entry gets a fly #TWIML1MIL sticker plus a chance to win one of 10 limited edition t-shirts commemorating the occasion. We’ll be giving away some other mystery prizes from the magic TWIML swag bag along the way, so you should definitely enter. If you’re not on Twitter, or want more ways to enter, just look below for more chances to win!
Thanks to our Sponsor
I’d like to thank our friends over at Intel Nervana for their sponsorship of this podcast and our NIPS series. While Intel was very active at NIPS, with a bunch of workshops, demonstrations and poster sessions, their big news at NIPS was the first public viewing of the Intel Nervana™ Neural Network Processor, or NNP. The goal of the NNP architecture is to provide the flexibility needed to support deep learning primitives while making the core hardware components as efficient as possible, giving neural network designers powerful tools for solving larger and more difficult problems while minimizing data movement and maximizing data re-use. To learn more about Intel’s AI Products Group and the Intel Nervana NNP, visit IntelNervana.com.