Imagine while Reasoning in Space: Multimodal Visualization-of-Thought with Chengzu Li
EPISODE 722 | MARCH 10, 2025
About this Episode
Today, we're joined by Chengzu Li, a PhD student at the University of Cambridge, to discuss his recent paper, “Imagine while Reasoning in Space: Multimodal Visualization-of-Thought.” We explore the motivations behind MVoT, its connection to prior work like TopViewRS, and its grounding in cognitive science principles such as dual coding theory. We dig into the MVoT framework along with its various task environments—maze, mini-behavior, and frozen lake—and discuss token discrepancy loss, a technique designed to align language and visual embeddings, ensuring accurate and meaningful visual representations. Additionally, we cover the data collection and training process, reasoning over relative spatial relations between different entities, and dynamic spatial reasoning. Lastly, Chengzu shares insights from his experiments with MVoT, focusing on the lessons learned and the potential for applying these models in real-world scenarios like robotics and architectural design.
About the Guest
Chengzu Li
University of Cambridge
Resources
- Imagine while Reasoning in Space: Multimodal Visualization-of-Thought
- TopViewRS: Vision-Language Models as Top-View Spatial Reasoners
- Mind's Eye of LLMs: Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models
- ANOLE: An Open, Autoregressive, Native Large Multimodal Models for Interleaved Image-Text Generation