Pathologies of Neural Models and Interpretability with Alvin Grissom II
EPISODE 229 | FEBRUARY 11, 2019
About this Episode
Today, we're excited to continue our Black in AI series with Alvin Grissom II, Assistant Professor of Computer Science at Ursinus College.
Alvin's research focuses on computational linguistics, and we begin with a brief chat about some of his prior work on verb prediction using reinforcement learning. We then dive into the paper he presented at the Black in AI workshop, "Pathologies of Neural Models Make Interpretations Difficult." We talk through some of the "pathological behaviors" he identified in the paper, how we can better understand the overconfidence of trained deep learning models in certain settings, and how we can improve model training with entropy regularization. We also touch on the parallel between his work and the work being done on adversarial examples by Ian Goodfellow and others.
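To make the entropy regularization idea concrete, here is a minimal sketch, not the paper's exact implementation: a cross-entropy training loss augmented with an entropy bonus that penalizes overconfident, low-entropy predictions. PyTorch is assumed, and the function name and the `beta` weight are illustrative choices, not details from the episode or the paper.

```python
# Minimal sketch of entropy regularization for a classifier (illustrative only).
import torch
import torch.nn.functional as F

def entropy_regularized_loss(logits, targets, beta=0.1):
    """Cross-entropy loss minus a scaled entropy bonus.

    Subtracting the (scaled) entropy of the predictive distribution rewards
    less confident outputs, counteracting the overconfidence discussed above.
    """
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    return ce - beta * entropy

# Toy usage: 4 examples, 3 classes.
logits = torch.randn(4, 3, requires_grad=True)
targets = torch.tensor([0, 2, 1, 0])
loss = entropy_regularized_loss(logits, targets)
loss.backward()
print(loss.item())
```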
About the Guest
Alvin Grissom II
Haverford College
Resources
- Paper: Pathologies of Neural Models Make Interpretations Difficult
- The Stanford Question Answering Dataset (SQuAD)
- Paper: Explaining and Harnessing Adversarial Examples
- Paper: Don't Until the Final Verb Wait: Reinforcement Learning for Simultaneous Machine Translation
- Paper: Incremental Prediction of Sentence-final Verbs: Humans versus Machines
- Paper: Interpretese vs. Translationese: The Uniqueness of Human Strategies in Simultaneous Interpretation
- Paper: Interpreting Neural Networks With Nearest Neighbors
- Paper: Syntax-based Rewriting for Simultaneous Machine Translation
- Paper: Collection of a Simultaneous Translation Corpus for Comparative Analysis
- Paper: Learning to Translate in Real-time with Neural Machine Translation
- Paper: STACL: Simultaneous Translation with Integrated Anticipation and Controllable Latency
- Paper: Prediction Improves Simultaneous Neural Machine Translation
- Black in AI
