In this episode of the podcast, we are joined by ChatGPT, the latest and coolest large language model developed by OpenAI. In our conversation with ChatGPT, we discuss the background and capabilities of large language models, the potential applications of these models, and some of the technical challenges and open questions in the field. We also explore the role of supervised learning in creating ChatGPT, and the use of PPO (proximal policy optimization) in training the model. Finally, we discuss the risks of misuse of large language models, and the best resources for learning more about these models and their applications. Join us for a fascinating conversation with ChatGPT, and learn more about the exciting world of large language models.
For the most part, what you hear is a straight-through, first-attempt, and not-at-all hand-crafted dialogue between the system and me, with a few small caveats that I share in the introduction to the episode.
To produce ChatGPT’s voice and avatar for this interview, we used a tool called Synthesia, which is designed to create training, marketing and support videos. It really couldn’t have been easier—I just pasted the text and clicked a button using the default voice and avatar. Shout out to the team there for helping me get set up.
Note: Some researchers and others object to the anthropomorphization of AI on the grounds that it promotes misleading interpretations of, and beliefs about, what AI is and what its capabilities are. Here I'm counting on the relative sophistication of TWIML listeners to understand that ChatGPT isn't human; it's a mathematical model that is really good at predicting words in response to a prompt. I then fed its responses into another mathematical model that takes a string of text and manipulates an avatar image to make it look like it is speaking.