About This Episode
Today we’re joined by Yejin Choi, a professor at the University of Washington. We had the pleasure of catching up with Yejin after her keynote interview at the recent Stanford HAI Workshop on Foundation Models. In our conversation, we explore her work at the intersection of natural language generation and commonsense reasoning, including how she defines common sense and the current state of research in that area. We discuss how commonsense reasoning could be used for creative storytelling, how transformers could be applied to these tasks, and we dig into the subfields of physical and social commonsense reasoning. Finally, we talk through the future of Yejin’s research and the areas she sees as most promising going forward.
If you enjoyed this episode, check out our conversation on AI Storytelling Systems with Mark Riedl.
Watch on YouTube
Connect with Yejin!
- Video: Workshop on Foundation Models (Session II: Technological Foundations)
- Stanford HAI Workshop on Foundation Models
- Paper: COMET: Commonsense Transformers for Automatic Knowledge Graph Construction
- Paper: Social Chemistry 101: Learning to Reason about Social and Moral Norms
- Paper: 🍷MERLOT: Multimodal Neural Script Knowledge Models
- Paper: MAUVE: Human-Machine Divergence Curves for Evaluating Open-Ended Text Generation
- Paper: NaturalProofs: Mathematical Theorem Proving in Natural Language
- Paper: CommonsenseQA 2.0: Exposing the Limits of AI through Gamification