This show is part of a series that I'm really excited about, in part because I've been working to bring it to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman, and others. In this episode I'm joined by Dario Amodei, Team Lead for Safety Research at OpenAI. While in San Francisco a few months ago, I spent some time at the OpenAI office, where I sat down with Dario to chat about the work happening at OpenAI around AI safety.
Dario and I dive into the two areas of AI safety that he and his team are focused on: robustness and alignment. We also touch on his research with the Google DeepMind team, the OpenAI Universe tool, and how human interactions can be incorporated into reinforcement learning models. This was a great conversation, and like the other shows in this series, it's a nerd alert show!