Pathologies of Neural Models and Interpretability with Alvin Grissom II

EPISODE 229 | FEBRUARY 11, 2019

About this Episode

Today, we're excited to continue our Black in AI series with Alvin Grissom II, Assistant Professor of Computer Science at Ursinus College. Alvin's research focuses on computational linguistics, and we begin with a brief chat about some of his prior work on verb prediction using reinforcement learning. We then dive into the paper he presented at the workshop, "Pathologies of Neural Models Make Interpretations Difficult." We talk through some of the "pathological behaviors" identified in the paper, how we can better understand the overconfidence of trained deep learning models in certain settings, and how model training can be improved with entropy regularization. We also touch on the parallels between his work and the work on adversarial examples by Ian Goodfellow and others.

About the Guest

Alvin Grissom II

Haverford College
