Today we’re joined by Nataniel Ruiz, a PhD student in the Image & Video Computing group at Boston University.
Subscribe: iTunes / Google Play / Spotify / RSS
We caught up with Nataniel to discuss his paper “Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems,” which will be presented at the upcoming CVPR conference. In our conversation, we discuss the core idea of the work: injecting an imperceptible adversarial perturbation into an image so that a generative model can no longer convincingly manipulate it. We also explore some of the challenging parts of implementing this work, a few potential scenarios in which it could be deployed, and the broader contributions behind it.
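For listeners curious about the mechanics, the disruption idea can be sketched in a few lines of PyTorch: an iterative FGSM-style attack that searches for a small perturbation maximizing the distance between the generator’s output on the perturbed image and its output on the original. This is a minimal sketch under assumptions, not the paper’s exact implementation; the `generator` callable, the MSE loss choice, and the hyperparameters below are illustrative.

```python
import torch
import torch.nn.functional as F

def disrupt_image(generator, x, eps=0.05, step=0.01, iters=10):
    """Iterative FGSM-style disruption (a sketch, not the paper's exact code).

    Finds a small perturbation eta (||eta||_inf <= eps) that pushes the
    generator's output on x + eta away from its output on the clean x,
    so the manipulated result is visibly corrupted.
    """
    with torch.no_grad():
        clean_output = generator(x)  # output we want to move away from

    eta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        out = generator(x + eta)
        # Ascend the distortion loss: larger distance = stronger disruption.
        loss = F.mse_loss(out, clean_output)
        loss.backward()
        with torch.no_grad():
            eta += step * eta.grad.sign()  # signed-gradient ascent step
            eta.clamp_(-eps, eps)          # keep the noise imperceptible
        eta.grad.zero_()
    return (x + eta).detach()
```

In this sketch, `generator` is assumed to be any image-to-image PyTorch model (e.g., a StarGAN-style face manipulator); running its output through `disrupt_image` yields a protected image that looks unchanged to a human but breaks the model’s manipulation.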
Connect with Nataniel!
Resources
- Paper: Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems
- Video: Disrupting Deepfakes: Adversarial Attacks Against Image Translation Networks
- Paper: Few-Shot Adversarial Learning of Realistic Neural Talking Head Models
- StarGAN
- StyleGAN
- GANimation
- CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
- “Why Should I Trust You?”: Explaining the Predictions of Any Classifier (LIME)
Join Forces!
- Join the TWIML Community!
- Check out our TWIML Presents: series page!
- Register for the TWIML Newsletter
- Check out the official TWIMLcon:AI Platform video packages here!
- Download our latest eBook, The Definitive Guide to AI Platforms!
“More On That Later” by Lee Rosevere licensed under CC BY 4.0