About This Episode
Today we’re joined by Yanshuai Cao, a senior research team lead at Borealis AI. In our conversation with Yanshuai, we explore his work on Turing, a natural-language-to-SQL engine that allows users to get insights from relational databases without having to write code. We do a bit of compare and contrast with the recently released Codex model from OpenAI, and discuss the role that reasoning plays in solving this problem and how it is implemented in the model. We also talk through various challenges, like data augmentation and the complexity of the queries that Turing can produce, as well as a paper that explores the explainability of the model.
If you enjoyed this episode, you should definitely check out our conversation with Greg Brockman on “Codex, OpenAI’s Automated Code Generation API”.
Watch on YouTube
Connect with Yanshuai!
- Borealis AI
- Blog: Why is Cross-Domain Text-to-SQL Hard?
- Paper: TURING: an Accurate and Interpretable Multi-Hypothesis Cross-Domain Natural Language Database Interface
- Paper: Optimizing Deeper Transformers on Small Datasets
- Paper: Code Generation from Natural Language with Less Prior Knowledge and More Monolingual Data
- Paper: A Globally Normalized Neural Model for Semantic Parsing
- Spider 1.0
- Paper: RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers
- Paper: TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation
- Paper: Improving Transformer Optimization Through Better Initialization