Autoformalization and Verifiable Superintelligence with Christian Szegedy
EPISODE 745 | SEPTEMBER 2, 2025
About this Episode
In this episode, Christian Szegedy, Chief Scientist at Morph Labs, joins us to discuss how formal mathematics and formal reasoning enable the creation of more robust and safer AI systems. A pioneer behind concepts like the Inception architecture and adversarial examples, Christian now focuses on autoformalization—the AI-driven process of translating mathematical concepts from their human-readable form into rigorously formal, machine-verifiable logic. We explore the critical distinction between the informal reasoning of current LLMs, which is prone to errors and subversion, and the provably correct reasoning enabled by formal systems. Christian outlines how this approach provides a robust path toward AI safety and also yields the high-quality, verifiable data needed to train models capable of surpassing human scientists in specialized domains. We also delve into his predictions for achieving this superintelligence and his ultimate vision for AI as a tool that helps humanity understand itself.
About the Guest
Christian Szegedy
Morph Labs
Resources
- Morph Labs
- Rethinking the Inception Architecture for Computer Vision
- Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
- Explaining and Harnessing Adversarial Examples
- Autoformalization with Large Language Models
- Morph Cloud
- The abc conjecture almost always — autoformalized
- Maxwell's Equations