Fairness in Machine Learning with Hanna Wallach
EPISODE 232 | FEBRUARY 20, 2019
About this Episode
Today we're joined by Hanna Wallach, a Principal Researcher at Microsoft Research.
Hanna and I really dig into how bias and a lack of interpretability and transparency show up across machine learning. We discuss the role that human biases, even inadvertent ones, play in tainting training data, whether the deployment of "fair" ML models can actually be achieved in practice, and much more. Along the way, Hanna points us to a TON of papers and resources for further exploring the topic of fairness in ML. You'll definitely want to check out the notes page for this episode, which you'll find at twimlai.com/talk/232.
About the Guest
Hanna Wallach
Microsoft Research
Resources
- Paper: Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
- Story: Amazon scraps secret AI recruiting tool that showed bias against women
- Story: Machine Bias (ProPublica)
- Paper: Discrimination in Online Ad Delivery
- Paper: Semantics derived automatically from language corpora contain human-like biases
- Paper: Distributed Representations of Words and Phrases and their Compositionality
- Paper: Unequal Representation and Gender Stereotypes in Image Search Results for Occupations
- Jenn Wortman Vaughan - Tutorial: Challenges of incorporating algorithmic fairness into industry practice
- Paper: A Reductions Approach to Fair Classification
- Paper: Improving fairness in machine learning systems: What do industry practitioners need?
- ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*)
- AI Now Institute
- danah boyd
- Partnership on AI
