Today we continue our exploration of Trust in AI with this interview with Richard Zemel, Professor in the Department of Computer Science at the University of Toronto and Research Director at the Vector Institute.
Subscribe: iTunes / Google Play / Spotify / RSS
In our conversation, Rich describes some of his work on fairness in machine learning algorithms, including how he defines both group and individual fairness and his group’s recent NeurIPS poster, “Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer.”
Thanks to our Sponsor!
Thanks to Georgian Partners for their continued support of the podcast and for sponsoring this series. Georgian Partners is a venture capital firm that invests in growth-stage business software companies that use applied artificial intelligence, conversational AI, and trust to differentiate and advance their business solutions. Post-investment, Georgian works closely with their portfolio companies to accelerate the adoption of these key technologies for increased value. To help their portfolio companies hire the right technical talent, Georgian recently published “Building Conversational AI Teams,” a comprehensive guide to lead you through sourcing, acquiring, and nurturing a successful conversational AI team. Check it out at twimlai.com/georgian.
Mentioned in the Interview
- Presentation: Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer
- Presentation: Learning Adversarially Fair and Transferable Representations
- Paper: Learning Fair Representations
- Learn more about Building Conversational AI Teams with Georgian Partners
- Sign up for our AI Platforms eBook Series!
- TWIML Presents: AWS re:Invent Series page
- TWIML Online Meetup
- Register for the TWIML Newsletter
“More On That Later” by Lee Rosevere licensed under CC By 4.0