

Designing Computer Systems for Software with Kunle Olukotun
This Week in Machine Learning & AI

Today we’re joined by Kunle Olukotun, Professor in the Departments of Electrical Engineering and Computer Science at Stanford University and Chief Technologist at SambaNova Systems. …

Accelerating Deep Learning with Mixed Precision Arithmetic with Greg Diamos

In this show, I speak with Greg Diamos, senior computer systems researcher at Baidu. Greg joined me before his talk at the Deep Learning Summit, where he spoke on “The…

Block-Sparse Kernels for Deep Neural Networks with Durk Kingma

This show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus…

Intel Buys Nervana Systems to Break NVIDIA’s Hold on Deep Learning Hardware
On the heels of last week’s $200 million acquisition of Turi by Apple, Intel announced on Tuesday yet another acquisition in the machine learning and AI space, this time the $400 million purchase of deep learning cloud startup Nervana Systems.
Another Huge ML Acquisition, AI in the Olympics + Win a Free Ticket to the O’Reilly AI Conference—TWiML 2016/08/12
This week we discuss Intel’s latest deep learning acquisition, AI in the Olympics, image completion with deep learning in TensorFlow, and how you can win a free ticket to the O’Reilly AI Conference in New York City, plus a bunch more.
New TITAN X Benchmark, Plus What to Do When You Need More GPUs
Researchers on the MXNet team were among the lucky folks who had their hands on the new TITAN X GPU at this point, and they published an initial benchmark this week following the DeepMark deep learning benchmarking protocol. Plus a “deep learning supercomputer” and Azure…
Machine Learning for Datacenter Optimization, a ‘Crazy’ New GPU from NVIDIA & Faster RNN Training–TWiML 2016/07/22

This week’s show covers Google’s use of machine learning to cut datacenter power consumption, NVIDIA’s new ‘crazy, reckless’ GPU, and a new Layer Normalization technique that promises to reduce the…
