Infrastructure Scaling and Compound AI Systems with Jared Quincy Davis
EPISODE 740 | JULY 22, 2025
About this Episode
In this episode, Jared Quincy Davis, founder and CEO of Foundry, introduces the concept of "compound AI systems," an approach that lets users build powerful, efficient applications by composing multiple, often diverse, AI models and services. We discuss how these "networks of networks" can push out the Pareto frontier, delivering results that are simultaneously faster, more accurate, and even cheaper than single-model approaches. Using examples like "laconic decoding," Jared explains practical techniques for building these systems and the underlying principles of inference-time scaling. The conversation also delves into the critical role of co-design, where the evolution of AI algorithms and the underlying cloud infrastructure are deeply intertwined, shaping the future of agentic AI and the compute landscape.
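To give a sense of how lightweight these compound techniques can be, here is a minimal Python sketch of the laconic-decoding idea referenced in the episode: sample several candidate answers in parallel and keep the shortest one. The `model_call` parameter and the `toy_model` stub are hypothetical stand-ins for a real LLM client, not Foundry's or Ember's API.

```python
import concurrent.futures
from typing import Callable

def laconic_decode(model_call: Callable[[str], str], prompt: str, k: int = 5) -> str:
    """Sample k candidate answers in parallel and return the shortest.

    Heuristic: on tasks like math or code, shorter sampled answers are
    disproportionately likely to be correct, so picking the minimum-length
    candidate can improve accuracy without a separate verifier model.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=k) as pool:
        candidates = list(pool.map(model_call, [prompt] * k))
    return min(candidates, key=len)

if __name__ == "__main__":
    # Toy stand-in for a real LLM client (e.g., an OpenAI- or
    # Anthropic-compatible API call); returns answers of varying length.
    import random

    def toy_model(prompt: str) -> str:
        return "42" + "." * random.randint(0, 10)

    print(laconic_decode(toy_model, "What is 6 * 7?"))
```

Because the k calls run concurrently, this kind of best-of-k selection can improve accuracy with little added latency, which is part of why such systems can sit beyond the single-model Pareto frontier.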
About the Guest
Jared Quincy Davis
Foundry
Resources
- Introducing Project Ember: a compositional framework for compound AI systems
- Foundry
- Networks of Networks: Complexity Class Principles Applied to Compound AI Systems Design
- Optimizing Model Selection for Compound AI Systems
- Are More LLM Calls All You Need? Towards Scaling Laws of Compound Inference Systems
- Laconic decoding (Alex Dimakis' Tweet)
- SWE-bench
- DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines
