AI Trends 2024: Machine Learning & Deep Learning with Thomas G. Dietterich
EPISODE 666 | JANUARY 8, 2024
About this Episode
Today we continue our AI Trends 2024 series with a conversation with Thomas Dietterich, distinguished professor emeritus at Oregon State University. As you might expect, large language models figured prominently in our conversation, and we covered a vast array of papers and use cases exploring current research into topics such as monolithic vs. modular architectures, hallucinations, the application of uncertainty quantification (UQ), and the use of RAG as a kind of memory module for LLMs. Lastly, don’t miss Tom’s predictions on what he foresees happening this year, as well as his words of encouragement for those new to the field.
About the Guest
Thomas G. Dietterich
Oregon State University
Resources
- Paper: Sparks of Artificial General Intelligence: Early experiments with GPT-4
- Paper: Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve
- Paper: Dissociating language and thought in large language models
- Paper: A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification
- Paper: What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
- Paper: DEUP: Direct Epistemic Uncertainty Prediction
- Paper: Cognitive Mirage: A Review of Hallucinations in Large Language Models
- Paper: A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation
- Paper: Language Models (Mostly) Know What They Know
- Paper: SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
- Paper: LM-Polygraph: Uncertainty Estimation for Language Models
- Paper: The Internal State of an LLM Knows When It's Lying
- Paper: Open-Set Recognition: a Good Closed-Set Classifier is All You Need?
- Paper: Sources of Uncertainty in Machine Learning - A Statisticians' View
- Talk: What’s Wrong with Large Language Models and What We Should Be Building Instead
- Paper: Survey of Hallucination in Natural Language Generation
- Paper: On Hallucination and Predictive Uncertainty in Conditional Language Generation
- Paper: BARTScore: Evaluating Generated Text as Text Generation
- What Does it Mean for a Machine to “Understand”? with Thomas G. Dietterich - #315
- AI Trends 2024: Computer Vision with Naila Murray - #665
- Does ChatGPT “Think”? A Cognitive Neuroscience Perspective with Anna Ivanova - #620