Accelerating Deep Learning with Mixed Precision Arithmetic with Greg Diamos

    This Week in Machine Learning & AI

    In this show I speak with Greg Diamos, senior computer systems researcher at Baidu. Greg joined me before his talk at the Deep Learning Summit, where he spoke on “The Next Generation of AI Chips.”

    Greg’s talk focused on work his team was involved in that accelerates deep learning training by using mixed 16-bit and 32-bit floating point arithmetic. We cover a ton of interesting ground in this conversation, and if you’re interested in systems-level thinking around scaling and accelerating deep learning, you’re really going to like this one. And of course, if you like this one, you’re also going to like TWiML Talk #14 with Greg’s former colleague, Shubho Sengupta, which covers a bunch of related topics.
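    If you’d like a concrete picture of the technique before you listen, here’s a rough, framework-agnostic sketch of the mixed-precision recipe: keep a 32-bit “master” copy of the weights, run the forward and backward passes in 16-bit, and scale the loss so small gradients don’t underflow the narrow float16 range. The toy linear model, names, and hyperparameters below are my own illustrative assumptions, not Baidu’s actual code.

```python
import numpy as np

# Sketch of mixed-precision training on a toy linear-regression problem:
# fp32 "master" weights, fp16 forward/backward pass, and a fixed loss-scaling
# factor to keep small fp16 gradients from underflowing. Illustrative only.

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 128)).astype(np.float16)            # fp16 inputs
true_w = (0.1 * rng.standard_normal((128, 1))).astype(np.float32)
y = (x.astype(np.float32) @ true_w).astype(np.float16)           # fp16 targets

w_master = (0.01 * rng.standard_normal((128, 1))).astype(np.float32)  # fp32 master weights
lr = 1e-2
loss_scale = np.float16(128.0)  # assumed constant; real systems often adapt it

for step in range(100):
    w16 = w_master.astype(np.float16)        # fp16 working copy of the weights
    pred = x @ w16                           # fp16 forward pass
    err = pred - y
    loss = float(np.mean(err.astype(np.float32) ** 2))  # reduce the loss in fp32

    # Backward pass in fp16 on the *scaled* error to avoid gradient underflow.
    grad16 = x.T @ (err * loss_scale) * np.float16(2.0 / x.shape[0])

    # Unscale in fp32 and apply the update to the fp32 master weights.
    grad32 = grad16.astype(np.float32) / float(loss_scale)
    w_master -= lr * grad32

print(f"final fp32 loss estimate: {loss:.4f}")
```

    The NumPy above only illustrates the numerics; the speedups Greg talks about come from running the fp16 math on hardware with fast half-precision paths, which is exactly the kind of systems question his talk digs into.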

    This show is part of a series of shows recorded at the RE•WORK Deep Learning Summit in Montreal back in October. This was a great event and, in fact, their next event, the Deep Learning Summit San Francisco, is right around the corner on January 25th and 26th. It will feature more leading researchers and technologists like the ones you’ll hear on the show this week, including Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off registration.

    Giveaway Update!

    Thanks to everyone who took the time to enter our #TWiML1MIL listener giveaway! We sent out an email to entrants a few days ago, so please be on the lookout for that. If you haven’t heard from us yet, please reach out to us at team@twimlai.com so that we can get you your swag!

    About Gregory

    Mentioned in the Interview
