Today, by listener request, we’re joined by Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University.
Subscribe: iTunes / Google Play / Spotify / RSS
Cynthia is passionate about both machine learning and social justice, with extensive work and leadership in both areas. In this episode we discuss her paper, ‘Please Stop Explaining Black Box Models for High Stakes Decisions’, and how interpretable models make for less error-prone and more comprehensible decisions. When these decisions impact the course of a human life, you’d better believe it’s important to understand how they were reached. Cynthia breaks down black box and interpretable models, including how they’re developed, sample use cases, and her future plans in the field.
From the Interview
Don’t forget to register today for TWIMLcon! Beyond keynote interviews with Andrew Ng, Hussein Mehanna, and Fran Bell, we’ve got a bunch of interesting speakers lined up to share their successes and failures helping their organizations build and productionize ML and deep learning models. Check out the lineup: http://twimlcon.com/
Check it out
- Register for TWIMLcon: AI Platforms now!
- Download our AI Platforms eBook Series!
- Check out all of our great TWIML Presents: here!
- Join the Meetup
- Register for the TWIML Newsletter
“More On That Later” by Lee Rosevere, licensed under CC BY 4.0