Dataflow Computing for AI Inference with Kunle Olukotun
EPISODE 751 | OCTOBER 14, 2025
About this Episode
In this episode, we're joined by Kunle Olukotun, professor of electrical engineering and computer science at Stanford University and co-founder and chief technologist at SambaNova Systems, to discuss reconfigurable dataflow architectures for AI inference. Kunle explains the core idea of building computers that are dynamically configured to match the dataflow graph of an AI model, moving beyond the traditional instruction-fetch paradigm of CPUs and GPUs. We explore how this architecture is well-suited for LLM inference, reducing memory bandwidth bottlenecks and improving performance. Kunle reviews how this system also enables efficient multi-model serving and agentic workflows through its large, tiered memory and fast model-switching capabilities. Finally, we discuss his research into future dynamically reconfigurable architectures, and the use of AI agents to build compilers for new hardware.
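To make the dataflow idea concrete, here is a minimal, illustrative sketch (not SambaNova's actual API or hardware model): a model is expressed as a graph of operations, and each node fires as soon as all of its inputs are available, rather than following a sequential instruction stream. The `DataflowGraph` class and the toy `relu(x*w + b)` model are hypothetical names introduced here for illustration.

```python
from collections import defaultdict, deque

class DataflowGraph:
    """Toy dataflow graph: a node fires when all of its inputs are ready."""

    def __init__(self):
        self.ops = {}                       # node name -> (fn, input names)
        self.consumers = defaultdict(list)  # producer name -> dependent nodes

    def add(self, name, fn, inputs=()):
        self.ops[name] = (fn, list(inputs))
        for i in inputs:
            self.consumers[i].append(name)

    def run(self, feeds):
        values = dict(feeds)
        # Count how many inputs each node is still waiting on.
        waiting = {n: sum(1 for i in ins if i not in values)
                   for n, (fn, ins) in self.ops.items()}
        ready = deque(n for n, w in waiting.items() if w == 0)
        while ready:
            n = ready.popleft()
            fn, ins = self.ops[n]
            values[n] = fn(*[values[i] for i in ins])
            # Firing this node may unblock its consumers.
            for c in self.consumers[n]:
                waiting[c] -= 1
                if waiting[c] == 0:
                    ready.append(c)
        return values

# A tiny "model": y = relu(x * w + b), using scalars in place of tensors.
g = DataflowGraph()
g.add("matmul", lambda x, w: x * w, ["x", "w"])
g.add("bias", lambda h, b: h + b, ["matmul", "b"])
g.add("relu", lambda h: max(h, 0.0), ["bias"])
out = g.run({"x": 2.0, "w": 3.0, "b": -1.0})
print(out["relu"])  # 5.0
```

The contrast with a CPU or GPU is that here the graph itself drives execution order; a reconfigurable dataflow unit takes this further by laying the graph's operators out spatially on hardware, so intermediate values flow directly between them instead of round-tripping through memory.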
About the Guest
Kunle Olukotun
SambaNova Systems, Stanford University
Resources
- SambaNova Systems
- Reconfigurable Dataflow Units (RDUs) — purpose-built for AI
- OptiML: An Implicitly Parallel Domain-Specific Language for Machine Learning
- Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer
- Llama-3.1-8B
- DeepSeek-R1
- NVIDIA cuDNN
- CrewAI
- AutoGen
- Designing Computer Systems for Software with Kunle Olukotun - #211