Today, by listener request, we're joined by Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University.
Cynthia is passionate about both machine learning and social justice, with extensive work and leadership in both areas. In this episode we discuss her paper, "Please Stop Explaining Black Box Models for High Stakes Decisions," and how interpretable models make for less error-prone and more comprehensible decisions. When these decisions affect the course of a human life, you'd better believe it's important to understand how they were reached. Cynthia breaks down black box and interpretable models, including how they're developed, sample use cases, and her future plans in the field.