Common Sense Reasoning in NLP with Vered Shwartz
EPISODE 461 | MARCH 4, 2021
About this Episode
Today we're joined by Vered Shwartz, a Postdoctoral Researcher at both the Allen Institute for AI and the Paul G. Allen School of Computer Science & Engineering at the University of Washington.
In our conversation with Vered, we explore her NLP research, which focuses on teaching machines common sense reasoning in natural language. We discuss training with GPT models, as well as the potential of multimodal reasoning and of incorporating images to augment reasoning capabilities. Finally, we talk through other noteworthy research in the field, how she deals with biases in the models, and her plans for incorporating some of these newer techniques into her future research.
About the Guest
Vered Shwartz
University of Washington
Resources
- Recent Breakthroughs and Uphill Battles in Modern Natural Language Processing
- Unsupervised Commonsense Question Answering with Self-Talk
- Do Neural Language Models Overcome Reporting Bias?
- "You are grounded!": Latent Name Artifacts in Pre-trained Language Models
- Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning
- Thinking Like a Skeptic: Defeasible Inference in Natural Language
- Commonsense Reasoning for Natural Language Processing
