Today we’re joined by Aleksander Madry, a faculty member in the MIT EECS Department and a member of both CSAIL and the Theory of Computation group.
Subscribe: iTunes / Google Play / Spotify / RSS
Aleksander, whose work focuses on the more theoretical side of machine learning research, walks us through his paper “Adversarial Examples Are Not Bugs, They Are Features,” which was presented at last year’s NeurIPS conference. In our conversation, we explore the idea that adversarial examples in machine learning systems are features, with results that might be undesirable but are still working as designed. We talk through what we expect these systems to do versus what they’re actually doing, whether we’re able to characterize these patterns and what makes them compelling, and whether the insights from the paper will inform opinions on either side of the deep learning debate.
Connect with Aleksander!
Resources
- Paper: Adversarial Examples Are Not Bugs, They Are Features (NeurIPS 2019)
- Madry Lab
- #119 – Adversarial Attacks Against Reinforcement Learning Agents with Ian Goodfellow & Sandy Huang
- Check out the ML Pulse Survey!
Join Forces!
- Join the TWIML Community!
- Check out our TWIML Presents: series page!
- Register for the TWIML Newsletter
- Check out the official TWIMLcon:AI Platform video packages here!
- Download our latest eBook, The Definitive Guide to AI Platforms!
“More On That Later” by Lee Rosevere licensed under CC BY 4.0