Legal and Policy Implications of Model Interpretability with Solon Barocas

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Today we’re joined by Solon Barocas, Assistant Professor of Information Science at Cornell University.

Solon is also a co-founder of the Fairness, Accountability, and Transparency in Machine Learning workshop, hosted annually at conferences like ICML. Solon and I caught up to discuss his work on model interpretability and the legal and policy implications of the use of machine learning models. In our conversation, we discuss the gap between law, policy, and ML, how to build a bridge between them, and the challenge of formalizing ethical frameworks for machine learning. We also look at his paper “The Intuitive Appeal of Explainable Machines,” which proposes that explainability is really two distinct problems, inscrutability and non-intuitiveness, and that disentangling the two lets us better reason about the kind of explainability that is actually needed in any given situation.


“More On That Later” by Lee Rosevere licensed under CC By 4.0
