High-Efficiency Diffusion Models for On-Device Image Generation and Editing with Hung Bui
EPISODE 753 | OCTOBER 28, 2025
About this Episode
In this episode, Hung Bui, Technology Vice President at Qualcomm, joins us to explore the latest high-efficiency techniques for running generative AI, particularly diffusion models, on-device. We dive deep into the technical challenges of deploying these models, which are powerful but computationally expensive due to their iterative sampling process. Hung details his team's work on SwiftBrush and SwiftEdit, which enable high-quality text-to-image generation and editing in a single inference step. He explains their novel distillation framework, where a multi-step teacher model guides the training of an efficient, single-step student model. We explore the architecture and training, including the use of a secondary 'coach' network that aligns the student's denoising function with the teacher's, allowing the model to bypass the iterative process entirely. Finally, we discuss how these efficiency breakthroughs pave the way for personalized on-device agents and the challenges of running reasoning models with techniques like inference-time scaling under a fixed compute budget.
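The distillation loop described above can be sketched in toy form. This is a minimal illustration only, not Qualcomm's implementation: the teacher, one-step student, and 'coach' networks are stand-in linear maps over a small vector "image", and names such as `teacher_eps` and `coach_eps` are hypothetical. The key structure is the alternating update, where the student's gradient comes from the difference between the frozen teacher's noise prediction and the coach's, while the coach is trained with an ordinary denoising loss on the student's own samples.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy "image" dimensionality

# Frozen multi-step teacher's noise-prediction network (stand-in for a
# pretrained diffusion U-Net), here just a fixed linear map.
W_teacher = rng.normal(size=(DIM, DIM)) * 0.1

def teacher_eps(x_noisy):
    return x_noisy @ W_teacher

# One-step student generator: maps latent noise z directly to an image
# in a single forward pass, bypassing iterative sampling.
W_student = rng.normal(size=(DIM, DIM)) * 0.1

# Secondary 'coach' network: trainable, tracks the student's output
# distribution so that (teacher_eps - coach_eps) points the student's
# samples toward the teacher's data manifold.
W_coach = W_teacher.copy()

def coach_eps(x_noisy):
    return x_noisy @ W_coach

LR = 1e-2
for step in range(200):
    z = rng.normal(size=(16, DIM))   # batch of latent noise
    x = z @ W_student                # single-step generation
    noise = rng.normal(size=x.shape)
    x_noisy = x + noise              # re-noise for score evaluation

    # Distillation gradient: teacher vs. coach noise predictions.
    grad_x = teacher_eps(x_noisy) - coach_eps(x_noisy)
    # Backprop through the linear generator x = z @ W_student.
    W_student -= LR * (z.T @ grad_x) / len(z)

    # Coach is updated with a standard denoising loss on student samples.
    err = coach_eps(x_noisy) - noise
    W_coach -= LR * (x_noisy.T @ err) / len(z)

# At inference time, generation is one matrix multiply: a single step.
sample = rng.normal(size=(1, DIM)) @ W_student
print(sample.shape)
```

In the real systems discussed in the episode, the linear maps above would be full diffusion backbones and the losses would follow the variational score distillation objective from the SwiftBrush paper; the sketch only shows why a trained student can skip the teacher's iterative sampling entirely at inference time.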
About the Guest
Hung Bui
Qualcomm
Resources
- Qualcomm AI Research
- SwiftBrush: One-Step Text-to-Image Diffusion Model with Variational Score Distillation
- SwiftEdit: Lightning Fast Text-Guided Image Editing via One-Step Diffusion
- PhoGPT: Generative Pre-training for Vietnamese
- SwiftBrush v2: Make Your One-step Diffusion Model Better Than Its Teacher
- Qualcomm AI Residency Program
- Introducing ChatGPT
- Distilling Transformers and Diffusion Models for Robust Edge Use Cases with Fatih Porikli - #738
