The use of machine learning in business, government, and other settings that require users to understand a model's predictions has exploded in recent years. This growth, combined with the increased popularity of opaque ML models like deep learning, has led to a thriving field of model explainability research and practice. In this panel discussion, we bring together experts and researchers to explore the current state of explainability and some of the key emerging ideas shaping the field. Each guest shares their unique perspective on, and contributions to, thinking about model explainability in a practical way. Join us as we explore concepts like stakeholder-driven explainability, adversarial attacks on explainability methods, counterfactual explanations, legal and policy implications, and more. We round out the session with an audience Q&A! Check out the list of resources below!

Video: https://www.youtube.com/embed/B2QBnVnbt7A

Panelists:
Rayid Ghani - Carnegie Mellon University
Solon Barocas - Cornell, Microsoft
Kush R. Varshney - IBM
Alessya Labzhinova - Stealth
Hima Lakkaraju - Harvard

Thank you to IBM for their support in helping to make this panel possible! IBM is committed to educating and supporting data scientists, and to bringing them together to overcome technical, societal, and career challenges. Through the IBM Data Science Community site, which has over 10,000 members, they provide a place for data scientists to collaborate, share knowledge, and support one another. The IBM Data Science Community site is a great place to connect with other data scientists and to find information and resources to support your career. Join and get a free month of select IBM programs on Coursera.

Resources

Rayid Ghani, Carnegie Mellon University - Professor in the Machine Learning Department (School of Computer Science) and the Heinz College of Information Systems and Public Policy
Topic: Explainability Use Cases in Public Policy and Beyond
Twitter: @rayidghani
TWIML AI Podcast - #283 - Real World Model Explainability

Solon Barocas, Cornell University - Assistant Professor, Department of Information Science; Principal Researcher at Microsoft Research
Topic: Hidden Assumptions Behind Counterfactual Explanations
Twitter: @s010n
TWIML AI Podcast - #219 - Legal and Policy Implications of Model Interpretability
Resources:
The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons - published at the 2020 ACM Conference on Fairness, Accountability, and Transparency; a shorter version appeared at the 2020 Workshop on Human Interpretability in Machine Learning (WHI)
Additional references:
Roles for Computing in Social Change - published at the 2020 ACM Conference on Fairness, Accountability, and Transparency
Fairness and Machine Learning - textbook published by MIT Press
The Intuitive Appeal of Explainable Machines

Kush R. Varshney, IBM - Distinguished Research Staff Member and Manager at the IBM Thomas J. Watson Research Center
Topic: Model Explainability as a Communications Challenge
Twitter: @krvarshney
Resources:
IBM AI Fairness 360
IBM AI Explainability 360
IBM Adversarial Robustness 360
IBM AI FactSheets 360
Paper: On Mismatched Detection and Safe, Trustworthy Machine Learning
Democast: Mitigating Discrimination and Bias with AI Fairness 360

Alessya Labzhinova - CEO of a stealth startup and former CTO in residence at AI2
Topic: Stakeholder-Driven Explainability
Resources:
Explainable Machine Learning in Deployment - Bhatt et al.
You Shouldn’t Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods - Dimanov et al.
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods - Slack et al.
Causability and Explainability of Artificial Intelligence in Medicine - Holzinger et al.
Getting a CLUE: A Method for Explaining Uncertainty Estimates - Antorán et al.

Hima Lakkaraju, Harvard University - Assistant Professor with appointments in the Business School and the Department of Computer Science
Topic: Adversarial Attacks, Misleading Explanations, and Solutions
Twitter: @hima_lakkaraju
TWIML AI Podcast - #387 - AI for High Stakes Decision Making
Resources:
Presentation
Brief Slide Deck
The slides also include references to these papers:
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
"How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations
Robust and Stable Black Box Explanations

Continuing the live interviews from #TWIMLcon! Last night we sat down with a few of the awesome #TWIMLcon speakers, sponsors, and attendees to chat about what they are working on, their favorite TWIML podcast episode, the best #TWIMLcon session so far, and more!

Weiping Peng - Software Architect at Salesforce and longtime TWIML podcast listener!
Drew Bollinger & Mark Wronkiewicz - Infrastructure and ML Modeling at Development Seed, using machine learning to analyze satellite images in the humanitarian and climate sphere (they are hiring!).
Ameen Kazerouni - Lead Data Scientist at Zappos; hear about his case study presentation yesterday at the conference!
Vince Jeffs - Senior Director, Product Strategy, Marketing AI & Decisioning at Pegasystems, and former TWIML podcast guest (twimlai.com/talk/154).