About This Episode
Today we’re joined by return guest Greg Brockman, co-founder and CTO of OpenAI. We had the pleasure of reconnecting with Greg on the heels of the announcement of Codex, OpenAI’s most recent release. Codex, a direct descendant of GPT-3, allows users to perform autocompletion tasks, drawing on the publicly available text and code on the internet. In our conversation with Greg, we explore how Codex’s results differ from GPT-3’s depending on the prompts it’s given, how it could evolve with different types of training data, and how users and practitioners should think about interacting with the API to get the most out of it. We also discuss Copilot, OpenAI’s recent collaboration with GitHub built on Codex, as well as the implications of Codex for coding education and explainability, and for broader societal issues like fairness and bias, copyright, and jobs.
Watch on YouTube
Connect with Greg!
- OpenAI Codex
- Paper: Evaluating Large Language Models Trained on Code
- GitHub Copilot
- HumanEval dataset
- A Sociological Study of the Official History of the Perceptrons Controversy
- Towards Artificial General Intelligence with Greg Brockman
- TWIML Presents: OpenAI