Today we’re joined by Kunle Olukotun, Professor in the Departments of Electrical Engineering and Computer Science at Stanford University, and Chief Technologist at SambaNova Systems.
Kunle was an invited speaker at NeurIPS this year, presenting on “Designing Computer Systems for Software 2.0.” In our conversation, we discuss various aspects of designing hardware systems for machine learning and deep learning, touching on multicore processor design, domain-specific languages, and graph-based hardware. We cover the limitations of current hardware, such as GPUs, and peer a bit into the future as well. This was a fun one!
Mentioned in the Interview
- SambaNova Systems
- Slides: Designing Computer Systems for Software 2.0
- Chris Ré, Stanford
- Paper: Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
- Paper: Plasticine: A Reconfigurable Architecture For Parallel Patterns
- Sign up for our AI Platforms eBook Series!
- TWiML Presents: NeurIPS Series page
- TWiML Online Meetup
- Register for the TWiML Newsletter
“More On That Later” by Lee Rosevere, licensed under CC BY 4.0