AI for High-Stakes Decision-Making with Hima Lakkaraju
EPISODE 387 | JUNE 29, 2020
About this Episode
Today we're joined by Hima Lakkaraju, an Assistant Professor at Harvard University with appointments in both the Business School and Department of Computer Science.
At CVPR, Hima was a keynote speaker at the Fair, Data-Efficient and Trusted Computer Vision Workshop, where she spoke on Understanding the Perils of Black Box Explanations. Hima talks us through her presentation, which focuses on the unreliability of perturbation-based explainability techniques such as LIME and SHAP, how attacks on these methods can be carried out, and what those attacks look like. We also discuss people's tendency to trust computer systems and their outputs, her thoughts on collaborator (and former TWIML guest) Cynthia Rudin's position that we shouldn't use black-box algorithms, and much more.
Resources
- Fair, Data-Efficient and Trusted Computer Vision Workshop - CVPR 2020
- Presentation: Understanding the Perils of Black Box Explanations
- #290 - The Problem with Black Boxes w/ Cynthia Rudin
- #110 - Trust in Human-Robot/AI Interactions with Ayanna Howard
- Paper: "Why Should I Trust You?": Explaining the Predictions of Any Classifier
- Paper: A Unified Approach to Interpreting Model Predictions
- Paper: Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
- Paper: "How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations
