AI’s Legal and Ethical Implications with Sandra Wachter
EPISODE 521 | SEPTEMBER 23, 2021
About this Episode
Today we're joined by Sandra Wachter, an associate professor and senior research fellow at the University of Oxford.
Sandra's work lies at the intersection of law and AI, focusing on what she likes to call "algorithmic accountability." Our conversation explores algorithmic accountability in three segments: explainability and transparency; data protection; and bias, fairness, and discrimination. We discuss how thinking about black boxes changes when regulation and law come into play, and break down counterfactual explanations and how they're created. We also explore why factors like a lack of oversight lead to poor self-regulation, and discuss the conditional demographic disparity test she helped develop for testing bias in models, which was recently adopted by Amazon.
About the Guest
Sandra Wachter
Oxford Internet Institute, University of Oxford
Resources
- Paper: Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI
- Paper: Principles alone cannot guarantee ethical AI
- Operationalizing Human-Centered Perspectives in Explainable AI
- Paper: Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
- Paper: Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law
- Workshop: SEDL @ ICLR 2021
- Paper: A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI
