Quantizing Transformers by Helping Attention Heads Do Nothing with Markus Nagel
EPISODE 663 | DECEMBER 26, 2023
About this Episode
Today we’re joined by Markus Nagel, research scientist at Qualcomm AI Research, who helps us kick off our coverage of NeurIPS 2023. In our conversation with Markus, we cover his accepted papers at the conference, along with other work presented by Qualcomm AI Research scientists. Markus’ first paper, Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing, examines the activation quantization issues introduced by the attention mechanism and proposes a way to solve them. We also discuss Pruning vs Quantization: Which is Better?, which compares the effectiveness of these two methods for model weight compression. Additional papers discussed cover topics like using scalarization in multi-task and multi-domain learning to improve training and inference, using diffusion models to plan over sequences of states and actions with embodied agents, applying geometric algebra with equivariance to transformers, and deductive verification of chain-of-thought reasoning performed by LLMs.
About the Guest
Markus Nagel
Qualcomm AI Research
Thanks to our sponsor Qualcomm AI Research
Qualcomm AI Research is dedicated to advancing AI to make its core capabilities — perception, reasoning, and action — ubiquitous across devices. Their work makes it possible for billions of users around the world to have AI-enhanced experiences on devices powered by Qualcomm Technologies. To learn more about what Qualcomm Technologies is up to on the research front, visit twimlai.com/qualcomm.
Resources
- Paper: Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing
- Paper: Pruning vs Quantization: Which is Better?
- Paper: Scalarization for Multi-Task and Multi-Domain Learning at Scale
- Paper: EDGI: Equivariant Diffusion for Planning with Embodied Agents
- Paper: Geometric Algebra Transformers
- Paper: Deductive Verification of Chain-of-Thought Reasoning
- Blog post: Qualcomm at NeurIPS 2023: Cutting-edge research in generative and embodied AI, model efficiency, and more
- Demo video: World's fastest Stable Diffusion running on a phone
- Demo video: Enhanced video segmentation with on device learning
- Demo video: AI assistant with fast Llama 2 Chat 7B
- Neural Augmentation for Wireless Communication with Max Welling - #398
- Neural Network Quantization and Compression with Tijmen Blankevoort - #292