Today we’re joined by Sameer Singh, an assistant professor in the Department of Computer Science at UC Irvine.
Subscribe: iTunes / Google Play / Spotify / RSS
Sameer’s work centers on large-scale and interpretable machine learning applied to information extraction and natural language processing. We caught up with Sameer right after he was awarded the Best Paper Award at ACL 2020 for his work on Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. In our conversation, we explore CheckList, the task-agnostic methodology for testing NLP models introduced in the paper. We also discuss how well we understand the causes of pitfalls or failure modes in deep learning models, Sameer’s thoughts on embodied AI, and his work on the now-famous LIME paper, which he co-authored alongside Carlos Guestrin.
Connect with Sameer!
- Paper: Beyond Accuracy: Behavioral Testing of NLP Models with CheckList
- Paper: “Why Should I Trust You?”: Explaining the Predictions of Any Classifier (LIME)
- Paper: Universal Adversarial Triggers for Attacking and Analyzing NLP
- Paper: AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
- #007 – Explaining the Predictions of Machine Learning Models with Carlos Guestrin
- Project Page: Universal Adversarial Triggers for Attacking and Analyzing NLP
- Semantically Equivalent Adversarial Rules for Debugging NLP Models
- #387 – AI for High-Stakes Decision Making with Hima Lakkaraju
- Join us at TWIMLfest: A Virtual AI Festival!
- Check out our TWIML Presents series page!
- Register for the TWIML Newsletter
- Check out the official TWIMLcon:AI Platform video packages here!
- Download our latest eBook, The Definitive Guide to AI Platforms!
“More On That Later” by Lee Rosevere licensed under CC By 4.0