What’s Next in LLM Reasoning? with Roland Memisevic
EPISODE 646 | SEPTEMBER 11, 2023
About this Episode
Today we’re joined by Roland Memisevic, a senior director at Qualcomm AI Research. In our conversation, we discuss the significance of language in humanlike AI systems, along with the advantages and limitations of autoregressive models like Transformers for building them. We cover the current and future role of recurrence in LLM reasoning and the importance of improving grounding in AI, including the potential of developing a sense of self in agents. Along the way, we discuss Fitness Ally, a fitness coach built on a visually grounded large language model that has served as a platform for Roland’s research into neural reasoning, as well as recent work exploring visual grounding for large language models and state-augmented architectures for AI agents.
About the Guest
Roland Memisevic
Qualcomm AI Research
Resources
- Paper: Look, Remember and Reason: Visual Reasoning with Grounded Rationales
- Paper: Situated Real-time Interaction with a Virtually Embodied Avatar
- Paper: Painter: Teaching Auto-regressive Language Models to Draw Sketches
- Paper: Deductive Verification of Chain-of-Thought Reasoning
- Book: Thinking, Fast and Slow by Daniel Kahneman
- Learning “Common Sense” and Physical Concepts with Roland Memisevic - #111
- Pixels to Concepts with Backpropagation with Roland Memisevic - #427
