TWIML Model Explainability Forum
Episode 401 | August 17, 2020
About this Episode
Today we're bringing you the latest installment in the TWIML Discussion Series: the Model Explainability Forum.
The use of machine learning in business, government, and other settings that require users to understand the model's predictions has exploded in recent years. This growth, combined with the increased popularity of opaque ML models like deep learning, has led to the development of a thriving field of model explainability research and practice.
In this panel discussion, we bring together experts and researchers to explore the current state of explainability and some of the key emerging ideas shaping the field. Each guest shares their unique perspective on, and contributions to, thinking about model explainability in a practical way. We explore concepts like stakeholder-driven explainability, adversarial attacks on explainability methods, counterfactual explanations, legal and policy implications, and more. We round out the session with an audience Q&A! Check out the list of resources below!
About the Guests
Solon Barocas
Microsoft Research
Rayid Ghani
Carnegie Mellon University
Hima Lakkaraju
Harvard University
Alessya Labzhinova
WhyLabs
Kush Varshney
IBM Research
Resources
- The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons. Published at the 2020 ACM Conference on Fairness, Accountability, and Transparency
- Shorter version presented at the 2020 Workshop on Human Interpretability in Machine Learning (WHI)
- Additional References:
- Roles for Computing in Social Change. Published at the 2020 ACM Conference on Fairness, Accountability, and Transparency
- Textbook on Fairness and Machine Learning. Published by MIT Press.
- The Intuitive Appeal of Explainable Machines
- IBM AI Fairness 360
- IBM AI Explainability 360
- IBM Adversarial Robustness 360
- IBM AI FactSheets 360
- Paper: On Mismatched Detection and Safe, Trustworthy Machine Learning
- Democast: Mitigating Discrimination and Bias with AI Fairness 360
- Explainable Machine Learning in Deployment, Bhatt et al.
- You Shouldn’t Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods, Dimanov et al.
- Fooling LIME and SHAP: Adversarial Attacks on Post Hoc Explanation Methods, Slack et al.
- Causability and Explainability of Artificial Intelligence in Medicine, Holzinger et al.
- Getting a CLUE: A Method for Explaining Uncertainty Estimates, Antorán et al.
- Presentation Brief Slide Deck
- The slides also have references to these papers:
