Today we kick off our CVPR coverage joined by Amir Habibian, a senior staff engineering manager at Qualcomm Technologies.
Subscribe: iTunes / Google Play / Spotify / RSS
In our conversation with Amir, whose research primarily focuses on video perception, we discuss a few papers he and his team presented at the event. We explore the paper Skip-Convolutions for Efficient Video Processing, which looks at training discrete variables end to end in visual neural networks. We also discuss his FrameExit paper, which proposes a conditional early-exiting framework for efficient video recognition.
Thanks to our Sponsor!
I’d like to send a huge thank you to our friends at Qualcomm Technologies for their continued support of the podcast, and their sponsorship of this series of podcasts from the CVPR conference! Qualcomm AI Research is dedicated to advancing AI to make its core capabilities — perception, reasoning, and action — ubiquitous across devices. Their work makes it possible for billions of users around the world to have AI-enhanced experiences on devices powered by Qualcomm Technologies. To learn more about what Qualcomm Technologies is up to on the research front, visit twimlai.com/qualcomm.
Connect with Amir!
- Paper: Skip-Convolutions for Efficient Video Processing
- Paper: FrameExit: Conditional Early Exiting for Efficient Video Recognition
- Paper: InverseForm: A Loss Function for Structured Boundary-Aware Segmentation
- Blog: What’s new with our AI Open Source: AIMET enhancements and code from papers
- Blog: World’s first software-based neural video decoder running HD format in real-time on a commercial smartphone
- Blog: Making it possible to efficiently analyze video with AI