Today we’re joined by Kunle Olukotun, Professor in the Departments of Electrical Engineering and Computer Science at Stanford University, and Chief Technologist at SambaNova Systems.
Subscribe: iTunes / Google Play / Spotify / RSS
Kunle was an invited speaker at NeurIPS this year, presenting on “Designing Computer Systems for Software 2.0.” In our conversation, we discuss various aspects of designing hardware systems for machine learning and deep learning, touching on multicore processor design, domain-specific languages, and graph-based hardware. We cover the limitations of current hardware, such as GPUs, and peer a bit into the future as well. This was a fun one!
Mentioned in the Interview
- SambaNova Systems
- Slides: Designing Computer Systems for Software 2.0
- MATLAB
- OptiML
- Chris Ré, Stanford
- Paper: Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
- Paper: Plasticine: A Reconfigurable Architecture For Parallel Patterns
- Sign up for our AI Platforms eBook Series!
- TWIML Presents: NeurIPS Series page
- TWIML Online Meetup
- Register for the TWIML Newsletter
“More On That Later” by Lee Rosevere, licensed under CC BY 4.0
anya chaliotis
I really enjoyed this non-linear interview. In a way it felt like watching ‘Memento’ – a random piece here, a random piece there, and it all comes together at the end… and makes me go back and listen to the podcast again. For somebody who doesn’t specialize in hardware, I now have a much deeper understanding of how hardware is designed to work. I won’t be canceling a GPU order at work tomorrow, but now I’ll feel like an educated consumer of this temporary technology.
Additional thanks for the concise clarity on what differentiates DL from classic ML – it’s non-convex!
The Hogwild idea is really, really wild. I laughed with you. Thanks again for a very entertaining and educational hour.