Fairness in Machine Learning with Hanna Wallach

This Week in Machine Learning & AI

Today we’re joined by Hanna Wallach, a Principal Researcher at Microsoft Research.

Hanna and I really dig into how bias and a lack of interpretability and transparency show up across machine learning. We discuss the role that human biases, even inadvertent ones, play in tainting data, whether the deployment of “fair” ML models can actually be achieved in practice, and much more. Along the way, Hanna points us to a TON of papers and resources for further exploring the topic of fairness in ML. You’ll definitely want to check out the notes page for this episode, which you’ll find at twimlai.com/talk/232.

Thanks to our Sponsor!


We’d like to thank Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility, and humanitarian action. Learn more at Microsoft.ai.

About Hanna

Mentioned in the Interview

“More On That Later” by Lee Rosevere, licensed under CC BY 4.0
