Mitigating Discrimination and Bias with AI Fairness 360

Democast

This month, we had the pleasure of chatting with Karthi Natesan Ramamurthy, a research staff member at the IBM TJ Watson Research Center and one of the architects of today's demo topic, IBM's AI Fairness 360 Toolkit. We had the opportunity to get an early look at AI Fairness 360 leading up to, and during, [TWIMLcon: AI Platforms](https://twimlcon.com/) last year, where Trisha Mahoney presented on the topic. You can find our conversation with Trisha [here](https://twimlcon.com/twimlcon-shorts-rosie-pongracz-trisha-mahoney-ibm/), and for her full presentation, you can purchase the TWIMLcon video pass [here](https://twimlcon.com/videos/).

In our conversation with Karthi, we explore some of the ins and outs of the toolkit, including:

- The decision to open-source the toolkit
- The various bias mitigation algorithms included in the toolkit
- Fairness metrics
- Use cases for AI Fairness 360
- The paper behind the toolkit: [AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias](https://arxiv.org/pdf/1810.01943)

Below you'll find the resources mentioned in the conversation. Please feel free to send any feedback on this conversation or the Democast format in a comment below, or via Twitter at [@samcharrington](https://twitter.com/samcharrington) or [@twimlai](https://twitter.com/twimlai).

- [Connect with Karthi!](https://researcher.watson.ibm.com/researcher/view.php?person=us-knatesa)
- [AI Fairness 360 Open Source Toolkit](https://aif360.mybluemix.net/)
- [TWIMLcon Shorts: Rosie Pongracz & Trisha Mahoney, IBM](https://twimlcon.com/twimlcon-shorts-rosie-pongracz-trisha-mahoney-ibm/)
- [Paper: AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias](https://arxiv.org/pdf/1810.01943)
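To give a flavor of what the toolkit's fairness metrics measure, here is a minimal, hand-rolled sketch of two group-fairness metrics discussed in the AIF360 paper: statistical parity difference and disparate impact. This is not AIF360 code; the toolkit computes these (and many more) through its metrics classes, and the toy labels and groups below are invented purely for illustration.

```python
# Sketch of two group-fairness metrics, computed by hand on toy data.
# AI Fairness 360 exposes equivalents through its metrics classes;
# the arithmetic below only illustrates what those metrics measure.

def favorable_rate(labels, groups, group_value):
    """Fraction of favorable outcomes (label == 1) within one group."""
    members = [l for l, g in zip(labels, groups) if g == group_value]
    return sum(members) / len(members)

# Toy outcomes: 1 = favorable (e.g. loan approved), 0 = unfavorable.
# Group membership: 0 = unprivileged group, 1 = privileged group.
labels = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]
groups = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

p_unpriv = favorable_rate(labels, groups, 0)  # 2/5 = 0.4
p_priv = favorable_rate(labels, groups, 1)    # 4/5 = 0.8

# Statistical parity difference: 0.0 means parity; a negative value
# means the unprivileged group receives favorable outcomes less often.
spd = p_unpriv - p_priv

# Disparate impact: ratio of the two rates; 1.0 means parity, and
# values below 0.8 are a common rule-of-thumb threshold for concern.
di = p_unpriv / p_priv

print(f"statistical parity difference: {spd:.2f}")  # -0.40
print(f"disparate impact: {di:.2f}")                # 0.50
```

In the toolkit itself, these same quantities are available once the data is wrapped in AIF360's dataset abstraction, and the bias mitigation algorithms Karthi discusses aim to push them back toward parity.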

Session Speakers

Karthikeyan Natesan Ramamurthy

IBM Thomas J Watson Research Center

Connect with Karthikeyan