Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks with Nataniel Ruiz
EPISODE 375 | MAY 14, 2020
About this Episode
Today we're joined by Nataniel Ruiz, a PhD student in the Image & Video Computing group at Boston University.
We caught up with Nataniel to discuss his paper "Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems," which will be presented at the upcoming CVPR conference. In our conversation, we discuss the core idea of the work: injecting imperceptible noise into an image to disrupt a generative model's ability to manipulate it. We also explore some of the challenges of implementing the approach, a few potential scenarios in which it could be deployed, and the broader contributions behind the paper.
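To give a rough sense of how this kind of disruption can work, here is a minimal PyTorch-style sketch, not the authors' code: it assumes `generator` is a callable image-to-image model with any conditioning (e.g., a target attribute vector, as in StarGAN) already baked in, and the loss choice and hyperparameters (`epsilon`, `step`, `iters`) are illustrative. It runs an iterative FGSM-style search for a small perturbation that maximizes the change in the generator's output:

```python
import torch
import torch.nn.functional as F

def disrupt(generator, x, epsilon=0.05, step=0.01, iters=10):
    """Iterative FGSM-style disruption (illustrative sketch, not the paper's code).

    Searches for a perturbation eta with ||eta||_inf <= epsilon that maximizes
    the distance between the generator's output on the clean image and its
    output on the perturbed image, so the manipulation visibly breaks while
    the input image itself looks unchanged.
    """
    with torch.no_grad():
        clean_out = generator(x)  # the manipulation produced from the clean image
    eta = torch.zeros_like(x)
    for _ in range(iters):
        adv = (x + eta).detach().requires_grad_(True)
        # Gradient *ascent* on this loss pushes the manipulated output
        # away from what the generator produces on the clean image.
        loss = F.mse_loss(generator(adv), clean_out)
        loss.backward()
        eta = (eta + step * adv.grad.sign()).clamp(-epsilon, epsilon)
    return (x + eta).clamp(0.0, 1.0)  # the disrupted ("immunized") image
```

The sign-of-gradient update with an L-infinity clamp is the standard FGSM/PGD recipe; the only twist here is that the loss is computed on the generator's output rather than on a classifier's prediction.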
About the Guest
Nataniel Ruiz
Resources
- Paper: Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems
- Video: Disrupting Deepfakes: Adversarial Attacks Against Image Translation Networks
- Paper: Few-Shot Adversarial Learning of Realistic Neural Talking Head Models
- StarGAN
- StyleGAN
- GANimation
- CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
- "Why Should I Trust You?": Explaining the Predictions of Any Classifier (LIME)