Scaling Agentic Inference Across Heterogeneous Compute with Zain Asgar
EPISODE 757 | DECEMBER 2, 2025
About this Episode
In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet's approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware—from H100s to older GPUs and CPUs—to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling.
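The disaggregation idea discussed in the episode—routing each workload to the cheapest hardware that still meets its latency needs—can be sketched as a toy scheduler. Everything below is a hypothetical illustration: the hardware names, cost, and latency figures are made up for the example and are not Gimlet's actual implementation or numbers.

```python
from dataclasses import dataclass

@dataclass
class HardwareTarget:
    name: str
    cost_per_mtok: float   # dollars per million tokens (illustrative numbers)
    token_latency_ms: float  # typical per-token latency (illustrative numbers)

# A heterogeneous fleet mixing high-end GPUs, older GPUs, and CPUs.
TARGETS = [
    HardwareTarget("H100", cost_per_mtok=2.00, token_latency_ms=10),
    HardwareTarget("A10",  cost_per_mtok=0.60, token_latency_ms=40),
    HardwareTarget("CPU",  cost_per_mtok=0.15, token_latency_ms=200),
]

def schedule(latency_budget_ms: float) -> HardwareTarget:
    """Pick the cheapest target whose latency fits the budget.

    Latency-insensitive agent steps fall through to cheap hardware,
    while interactive requests stay on fast GPUs.
    """
    feasible = [t for t in TARGETS if t.token_latency_ms <= latency_budget_ms]
    if not feasible:
        # Nothing meets the budget: fall back to the fastest target.
        return min(TARGETS, key=lambda t: t.token_latency_ms)
    return min(feasible, key=lambda t: t.cost_per_mtok)

if __name__ == "__main__":
    print(schedule(50).name)    # moderate budget -> older GPU
    print(schedule(5).name)     # tight budget -> fastest GPU
    print(schedule(1000).name)  # relaxed budget -> cheapest CPU
```

In practice the decision would also weigh queue depth, model placement, and numerical-precision constraints, but the cost-vs-latency trade-off shown here is the core of the unit-economics argument.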
About the Guest
Zain Asgar
Gimlet Labs
Resources
- Gimlet Labs Emerges from Stealth with 8-Figure Revenues, Fundamentally Shifting the Paradigm in How Agentic AI Workloads Are Run and Opening Up New Compute Capacity
- Gimlet Labs
- Benchmarking AI-generated CUDA kernels on an H100
- Speeding up PyTorch inference on Apple devices with AI-generated Metal kernels
- Dynamic Resource Allocation documentation
- NVIDIA NVLink and NVLink Switch
- NVIDIA Unveils Rubin CPX: A New Class of GPU Designed for Massive-Context Inference
- NVIDIA, Partners Drive Next-Gen Efficient Gigawatt AI Factories in Buildup for Vera Rubin
- MLIR
- The Torch-MLIR Project
- The LLVM Compiler Infrastructure
- Qualcomm's AI250 Attacks the AI Inference Memory Bottleneck | Durga Malladi Interview
- Qualcomm Unveils AI200 and AI250—Redefining Rack-Scale Data Center Inference Performance for the AI Era
- NVIDIA RTX 6000 Ada Generation Graphics Card
- NVIDIA DGX B200
- NVIDIA H100 GPU
- NVIDIA H200 GPU
- Intel® Gaudi® AI Accelerator Products
- Python
- LangChain
- LangSmith
- Hugging Face
- Vidrial (GitHub)
- Recurrence and Attention for Long-Context Transformers with Jacob Buckman - #750
- Closing the Loop Between AI Training and Inference with Lin Qiao - #742
