Transformer-Based Transform Coding with Auke Wiggers
EPISODE 570 | MAY 2, 2022
About this Episode
Today we’re joined by Auke Wiggers, an AI research scientist at Qualcomm. In our conversation, Auke discusses his team’s recent research on data compression using generative models. We explore the relationship between historical compression research and the current trend of neural compression, and the benefits of neural codecs, which learn to compress data from examples. We also cover how these models are evaluated, and recent developments showing that they can run in real time on a mobile device. Finally, we discuss the ICLR paper “Transformer-Based Transform Coding,” which proposes a vision-transformer-based architecture for image and video coding, as well as some of his team’s other accepted works at the conference.
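For readers new to neural compression, the core idea behind the codecs discussed in the episode is training a model to minimize a rate-distortion objective, L = R + λ·D, trading off the bits spent on the compressed representation (R) against reconstruction error (D). The toy sketch below illustrates that trade-off; the fixed scale factor stands in for a learned encoder/decoder and the bit-cost proxy is a deliberate simplification, not the method from any of the papers listed here.

```python
import math

# Toy illustration of the rate-distortion objective a neural codec optimizes:
#   L = R + lambda * D
# A fixed scale factor stands in for the learned analysis/synthesis
# transforms (hypothetical, for illustration only).

def encode(x, scale):
    # "Analysis transform" plus rounding (quantization to integer symbols).
    return [round(v * scale) for v in x]

def decode(z, scale):
    # "Synthesis transform": invert the scaling.
    return [v / scale for v in z]

def rate(z):
    # Crude bit-cost proxy: roughly the bits needed per integer symbol.
    return sum(math.log2(abs(v) + 1) + 1 for v in z)

def distortion(x, x_hat):
    # Mean squared error between input and reconstruction.
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def rd_loss(x, scale, lam=0.1):
    z = encode(x, scale)
    x_hat = decode(z, scale)
    return rate(z) + lam * distortion(x, x_hat)

x = [0.12, -0.5, 0.33, 0.9]
# A coarser quantization (small scale) spends fewer bits but reconstructs
# worse; lambda controls where on the rate-distortion curve the codec sits.
print(rd_loss(x, scale=2.0))
print(rd_loss(x, scale=16.0))
```

In a real neural codec the scale factor is replaced by trained networks and the rate term by a learned entropy model, but the training objective has this same two-term form.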
About the Guest
Auke Wiggers
Qualcomm
Resources
- Paper: Video Compression With Rate-Distortion Autoencoders
- Paper: Feedback Recurrent Autoencoder for Video Compression
- Paper: Extending Neural P-frame Codecs for B-frame Coding
- Paper: Transformer-based Transform Coding
- Paper: ConFeSS: A Framework for Single Source Cross-Domain Few-Shot Learning
- Paper: A Program to Build E(N)-Equivariant Steerable CNNs
- Intra-frame demo CVPR 2021 (image codec independently applied to each frame)
- Inter-frame demo around NeurIPS 2021 (proper video coding)
- Full-Stack AI Systems Development with Murali Akula - #563
- Deep Learning is Eating 5G. Here's How, w/ Joseph Soriaga - #525
- Natural Graph Networks with Taco Cohen - #440
- Neural Augmentation for Wireless Communication with Max Welling - #398
- Gauge Equivariant CNNs, Generative Models, and the Future of AI with Max Welling - #267