The use of machine learning in business, government, and other settings that require users to understand the model’s predictions has exploded in recent years. This growth, combined with the increased popularity of opaque ML models like deep learning, has led to the development of a thriving field of model explainability research and practice.
In this panel discussion, we’re bringing together experts and researchers to explore the current state of explainability and some of the key emerging ideas shaping the field. Each panelist will share their perspective on, and contributions to, making model explainability practical.
Join us as we explore concepts like stakeholder-driven explainability, adversarial attacks on explainability methods, counterfactual explanations, legal and policy implications, and more. We round out the session with an audience Q&A!
Solon Barocas, Microsoft
Rayid Ghani, Carnegie Mellon University
Alessya Labzhinova, Stealth
Hima Lakkaraju, Harvard
Kush R. Varshney, IBM
Thank you to IBM for their support in helping to make this panel possible! IBM is committed to educating and supporting data scientists, and to bringing them together to overcome technical, societal, and career challenges. Through the IBM Data Science Community site, which has over 10,000 members, they provide a place for data scientists to collaborate, share knowledge, and support one another.
IBM’s Data Science Community site is a great place to connect with other data scientists and to find information and resources to support your career.
Join and get a free month of select IBM Programs on Coursera.