The use of machine learning in business, government, and other settings where users must understand a model’s predictions has exploded in recent years. This growth, combined with the rising popularity of opaque ML models like deep learning, has given rise to a thriving field of model explainability research and practice.
In this panel discussion, we bring together experts and researchers to explore the current state of explainability and some of the key emerging ideas shaping the field. Each guest will share their unique perspective on, and contributions to, putting model explainability into practice.
Join us as we explore concepts like stakeholder-driven explainability, adversarial attacks on explainability methods, counterfactual explanations, legal and policy implications, and more. We round out the session with an audience Q&A! Check out the list of resources below!
Panelists:
Rayid Ghani – Carnegie Mellon University
Solon Barocas – Cornell, Microsoft
Kush R. Varshney – IBM
Alessya Labzhinova – Stealth
Hima Lakkaraju – Harvard
Thank you to IBM for their support in helping to make this panel possible! IBM is committed to educating and supporting data scientists, and to bringing them together to overcome technical, societal, and career challenges. Through the IBM Data Science Community site, which has over 10,000 members, they provide a place for data scientists to collaborate, share knowledge, find career resources, and support one another.
Join and get a free month of select IBM Programs on Coursera.
Resources
Rayid Ghani, Carnegie Mellon University – Professor in the Machine Learning Department (in the School of Computer Science) and the Heinz College of Information Systems and Public Policy
- Topic: Explainability Use Cases in Public Policy and Beyond
- Twitter: @rayidghani
- TWIML AI Podcast – #283 – Real World Model Explainability
Solon Barocas, Cornell University – Assistant Professor, Department of Information Science; Principal Researcher at Microsoft Research
- Topic: Hidden Assumptions Behind Counterfactual Explanations
- Twitter: @s010n
- TWIML AI Podcast – #219 – Legal and Policy Implications of Model Interpretability
- Resources:
- The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons. Published at the 2020 ACM Conference on Fairness, Accountability, and Transparency.
- Shorter version for the 2020 Workshop on Human Interpretability in Machine Learning (WHI)
- Additional References:
- Roles for Computing in Social Change. Published at the 2020 ACM Conference on Fairness, Accountability, and Transparency
- Textbook on Fairness and Machine Learning. Published by MIT Press.
- The Intuitive Appeal of Explainable Machines
Kush R. Varshney, IBM – Distinguished Research Staff Member and Manager at the Thomas J. Watson Research Center
- Topic: Model Explainability as a Communications Challenge
- Twitter: @krvarshney
Alessya Labzhinova – CEO of a stealth startup and former CTO-in-Residence at the Allen Institute for AI (AI2)
- Topic: Stakeholder-Driven Explainability
- Resources:
- Explainable Machine Learning in Deployment, Bhatt et al.
- You Shouldn’t Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods, Dimanov et al.
- Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods, Slack et al. (see the sketch after this list)
- Causability and Explainability of Artificial Intelligence in Medicine, Holzinger et al.
- Getting a CLUE: A Method for Explaining Uncertainty Estimates, Antorán et al.
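For readers new to the post-hoc explanation methods referenced above, here is a minimal, illustrative sketch of explaining a single prediction with LIME. It is not from the panel or any of the listed papers; it assumes the open-source `lime` and `scikit-learn` packages, and the dataset and model are placeholders chosen for brevity.

```python
# Minimal sketch of a post-hoc explanation with LIME (assumes
# `pip install lime scikit-learn`; dataset and model are placeholders).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Train an opaque model whose individual predictions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME perturbs the chosen instance and fits a simple local surrogate model;
# the surrogate's weights serve as the explanation for this one prediction.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Because explainers like this one rely on perturbing inputs around the instance, they can be gamed by a suitably constructed model, which is exactly the vulnerability the Slack et al. paper above demonstrates.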
Hima Lakkaraju, Harvard University – Assistant Professor with appointments in the Business School and the Department of Computer Science
- Topic: Adversarial Attacks, Misleading Explanations, and Solutions
- Twitter: @hima_lakkaraju
- TWIML AI Podcast – #387 – AI for High Stakes Decision Making
- Resources:
- Brief presentation slide deck
- The slides also include references to the papers discussed.