Language Understanding and LLMs with Christopher Manning

EPISODE 686 | MAY 27, 2024

About this Episode

Today, we're joined by Christopher Manning, the Thomas M. Siebel Professor in Machine Learning at Stanford University and a recent recipient of the 2024 IEEE John von Neumann Medal. In our conversation with Chris, we discuss his contributions to foundational research areas in NLP, including word embeddings and attention. We explore his perspectives on the intersection of linguistics and large language models, their ability to learn human language structures, and their potential to teach us about human language acquisition. We also dig into the concept of “intelligence” in language models, as well as the reasoning capabilities of LLMs. Finally, Chris shares his current research interests, the alternative architectures he anticipates emerging beyond the LLM, and the opportunities ahead in AI research.

About the Guest

Christopher Manning

Thomas M. Siebel Professor in Machine Learning, Departments of Linguistics and Computer Science, Stanford University; Stanford Artificial Intelligence Laboratory (SAIL); Stanford NLP Group; Stanford Institute for Human-Centered Artificial Intelligence (HAI)
