Recurrence and Attention for Long-Context Transformers with Jacob Buckman
EPISODE 750 | OCTOBER 7, 2025
About this Episode
Today, we're joined by Jacob Buckman, co-founder and CEO of Manifest AI, to discuss achieving long context in transformers. We cover the bottlenecks of scaling context length and recent techniques for overcoming them, including windowed attention, grouped query attention, and latent space attention. We explore the idea of weight-state balance and the weight-state FLOP ratio as a way of reasoning about the compute optimality of architectures, and we dig into the Power Retention architecture, which blends the parallelization of attention with the linear scaling of recurrence and promises speedups of >10x during training and >100x during inference. We also review Manifest AI's recent open-source projects: Vidrial, a custom CUDA framework for building highly optimized GPU kernels in Python, and PowerCoder, a 3B-parameter coding model fine-tuned from StarCoder to use power retention. Our chat also covers metrics like in-context learning curves and negative log likelihood for measuring context utility, the implications of scaling laws, and the future of long context lengths in AI applications.
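To make the attention-versus-recurrence trade-off discussed above concrete, here is a minimal NumPy sketch. It is not Manifest AI's Power Retention implementation; the `phi` feature map and the normalization are illustrative assumptions. It contrasts standard causal softmax attention, whose cost grows quadratically with sequence length, with a linear-attention-style recurrence that folds the past into a fixed-size state and therefore scales linearly.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Causal softmax attention: the (T, T) score matrix makes cost and
    memory grow quadratically with sequence length T."""
    T = Q.shape[0]
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    causal = np.tril(np.ones((T, T), dtype=bool))
    scores = np.where(causal, scores, -np.inf)          # mask out future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def recurrent_linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0)):
    """Linear-attention-style recurrence (illustrative, not Power Retention):
    a fixed-size state is updated once per token, so compute grows linearly
    with T and inference memory stays constant."""
    d_k, d_v = K.shape[-1], V.shape[-1]
    S = np.zeros((d_k, d_v))      # running sum of phi(k) v^T
    z = np.zeros(d_k)             # running sum of phi(k), for normalization
    outputs = []
    for q, k, v in zip(Q, K, V):
        fk = phi(k)
        S += np.outer(fk, v)
        z += fk
        fq = phi(q)
        outputs.append(fq @ S / (fq @ z + 1e-6))
    return np.stack(outputs)

# Tiny usage example with assumed shapes.
T, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, T, d))
print(softmax_attention(Q, K, V).shape)           # (8, 4)
print(recurrent_linear_attention(Q, K, V).shape)  # (8, 4)
```

The design point the episode turns on is visible here: attention recomputes scores against every past key, while the recurrence summarizes the past in a fixed-size state, which is why recurrent-style architectures promise constant-memory, faster inference at long context lengths.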
About the Guest
Jacob Buckman
Manifest AI
Resources
- Power Retention - Manifest AI
- Scaling Context Requires Rethinking Attention
- Power Retention (GitHub)
- Vidrial (GitHub)
- PowerCoder-3B (Hugging Face)
- A Triton-Based Library for Hardware-Efficient Implementations of Linear Attention Mechanism
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
- FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning
- Mamba: Linear-Time Sequence Modeling with Selective State Spaces
- Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality
- Mojo
- CuTe (NVIDIA CUTLASS Documentation)
- DeepSeek-V3 Technical Report
- Mamba, Mamba-2 and Post-Transformer Architectures for Generative AI with Albert Gu - #693