Today we’re joined by Solon Barocas, Assistant Professor of Information Science at Cornell University.
Subscribe: iTunes / Google Play / Spotify / RSS
Solon is also a co-founder of the Fairness, Accountability, and Transparency in Machine Learning workshop, which is hosted annually alongside conferences like ICML. Solon and I caught up to discuss his work on model interpretability and the legal and policy implications of the use of machine learning models. In our conversation, we discuss the gap between law, policy, and ML, how to build a bridge between them, and how to formalize ethical frameworks for machine learning. We also look at his paper "The Intuitive Appeal of Explainable Machines," which proposes that explainability is really two distinct problems: inscrutability and non-intuitiveness. Disentangling the two allows us to reason more clearly about the kind of explainability that's actually needed in any given situation.
Mentioned in the Interview
- Fairness, Accountability, and Transparency in Machine Learning
- Helen Nissenbaum
- Paper: The Intuitive Appeal of Explainable Machines
- Paper: Big Data’s Disparate Impact
- Paper: Fairness and Machine Learning
- Paper: Problem Formulation and Fairness
- Check out all of our great series from 2018 at the TWiML Presents: Series page!
- TWiML Online Meetup
- Register for the TWiML Newsletter
“More On That Later” by Lee Rosevere licensed under CC By 4.0