Today, we’re excited to continue our Black in AI series with Alvin Grissom II, Assistant Professor of Computer Science at Ursinus College.
Subscribe: iTunes / Google Play / Spotify / RSS
Alvin’s research focuses on computational linguistics, and we begin with a brief chat about some of his prior work on verb prediction using reinforcement learning. We then dive into the paper he presented at the workshop, “Pathologies of Neural Models Make Interpretations Difficult.” We talk through some of the “pathological behaviors” he identified in the paper, how we can better understand the overconfidence of trained deep learning models in certain settings, and how we can improve model training with entropy regularization. We also touch on the parallels between his work and the work being done on adversarial examples by Ian Goodfellow and others.
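For readers curious about the entropy regularization idea mentioned above, here is a minimal sketch of the general concept (not the paper’s exact implementation, and the function names are our own): the training loss is augmented with a term that rewards higher-entropy output distributions, discouraging the kind of overconfident predictions the paper identifies.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs, eps=1e-12):
    """Shannon entropy of a probability distribution (in nats)."""
    return -sum(p * math.log(p + eps) for p in probs)

def regularized_loss(logits, target, lam=0.1):
    """Cross-entropy minus lam * entropy of the prediction.

    Subtracting the entropy term penalizes low-entropy (overconfident)
    output distributions; lam controls the strength of the penalty.
    """
    probs = softmax(logits)
    ce = -math.log(probs[target] + 1e-12)
    return ce - lam * entropy(probs)
```

With `lam=0` this reduces to ordinary cross-entropy; increasing `lam` lowers the loss for less peaked predictions, nudging the model away from extreme confidence.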
Mentioned in the Interview
- Paper: Pathologies of Neural Models Make Interpretations Difficult
- The Stanford Question Answering Dataset (SQuAD)
- Paper: Explaining and Harnessing Adversarial Examples
- Paper: Don’t Until the Final Verb Wait: Reinforcement Learning for Simultaneous Machine Translation
- Paper: Incremental Prediction of Sentence-final Verbs: Humans versus Machines
- Paper: Interpretese vs. Translationese: The Uniqueness of Human Strategies in Simultaneous Interpretation
- Paper: Interpreting Neural Networks With Nearest Neighbors
- Paper: Syntax-based Rewriting for Simultaneous Machine Translation
- Paper: Collection of a Simultaneous Translation Corpus for Comparative Analysis
- Paper: Learning to Translate in Real-time with Neural Machine Translation
- Paper: STACL: Simultaneous Translation with Integrated Anticipation and Controllable Latency
- Paper: Prediction Improves Simultaneous Neural Machine Translation
- Black in AI
- Download our AI Platforms eBook Series!
- Check out all of our great series from 2018 at the TWIML Presents: Series page!
- TWIML Online Meetup
- Register for the TWIML Newsletter
“More On That Later” by Lee Rosevere licensed under CC BY 4.0