Designing Computer Systems for Software with Kunle Olukotun

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Today we’re joined by Kunle Olukotun, Professor of Electrical Engineering and Computer Science at Stanford University and Chief Technologist at SambaNova Systems.

Kunle was an invited speaker at NeurIPS this year, presenting on “Designing Computer Systems for Software 2.0.” In our conversation, we discuss various aspects of designing hardware systems for machine learning and deep learning, touching on multicore processor design, domain-specific languages, and graph-based hardware. We cover the limitations of current hardware, such as GPUs, and peer a bit into the future as well. This was a fun one!

About Kunle

Mentioned in the Interview

“More On That Later” by Lee Rosevere licensed under CC By 4.0

1 comment
  • anya chaliotis

I really enjoyed this non-linear interview. In a way it felt like watching ‘Memento’ – a random piece here, a random piece there, and it all comes together at the end…and makes me go back and listen to the podcast again. For somebody who doesn’t specialize in hardware, I now have a much deeper understanding of how HW is designed to work. I won’t be canceling a GPU order at work tomorrow, but now I will feel like an educated consumer of this temporary technology.

Additional thanks for the concise clarity on what differentiates DL from classic ML – it’s non-convex!

The Hogwild idea is really, really wild. I laughed with you. Thanks again for a very entertaining and educational hour.
