Are LLMs Good at Causal Reasoning? with Robert Ness
EPISODE 638 | JULY 17, 2023
About this Episode
Today we’re joined by Robert Osazuwa Ness, a senior researcher at Microsoft Research, professor at Northeastern University, and founder of Altdeep.ai. In our conversation with Robert, we explore whether large language models, specifically GPT-3, 3.5, and 4, are good at causal reasoning. We discuss the benchmarks used to evaluate these models and their limitations in answering specific causal reasoning questions, and Robert highlights the need for access to a model's weights, training data, and architecture to answer those questions rigorously. We also dig into the challenge of generalization in causal relationships, the importance of incorporating inductive biases, whether the models can generalize beyond the provided benchmarks, and the role of causal factors in decision-making processes.
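One of the benchmarks referenced in the resources below is the cause-effect pairs challenge, where a model is asked to judge the causal direction between two variables. As a rough illustration of what that kind of evaluation looks like, here is a minimal sketch that prompts a chat model for the more plausible causal direction; it assumes the OpenAI Python SDK and uses illustrative variable pairs, prompts, and model names rather than the paper's actual evaluation harness.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical variable pairs; real benchmarks (e.g., the cause-effect pairs
# challenge) come with ground-truth causal directions to score against.
pairs = [
    ("altitude", "average temperature"),
    ("age of a car", "resale price"),
]

for a, b in pairs:
    prompt = (
        f"Which cause-and-effect relationship is more likely?\n"
        f"A. {a} causes {b}\n"
        f"B. {b} causes {a}\n"
        f"Answer with the letter A or B only."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative choice; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep outputs stable for benchmarking
    )
    answer = response.choices[0].message.content.strip()
    print(f"{a} vs. {b}: model answered {answer}")

Comparing the model's answers against ground-truth directions over many such pairs gives the kind of accuracy numbers discussed in the episode, though as Robert notes, strong scores alone don't settle whether the model is genuinely reasoning causally.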
About the Guest
Robert Ness
Microsoft Research, Northeastern University
Resources
- Causal AI Book
- Paper: Causal Reasoning and Large Language Models: Opening a New Frontier for Causality
- Repository: Benchmarking causal discovery using ChatGPT: The cause-effect pairs challenge
- Video: Opening a New Frontier for Causality - CCAIM Seminar Series - Dr. Amit Sharma & Dr. Emre Kiciman - Microsoft Research
- Atticus Geiger
- Repository: microsoft/guidance: A guidance language for controlling large language models
- Causality 101 with Robert Ness
- AI Trends 2023: Causality and the Impact on Large Language Models with Robert Osazuwa Ness

