In this episode, we’re joined by Nick Bostrom, professor in the Faculty of Philosophy at the University of Oxford, where he also heads the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for humanity with regard to AI safety and ethics.
Subscribe: iTunes / Google Play / Spotify / RSS
Nick is, of course, also the author of the book “Superintelligence: Paths, Dangers, Strategies.” In our conversation, we discuss the risks associated with Artificial General Intelligence and the more advanced AI systems Nick refers to as superintelligence. We also discuss Nick’s writings on openness in AI development, and the advantages and costs of open and closed development on the part of nations and AI research organizations. Finally, we consider what good safety precautions might look like, and how we can create an effective ethics framework for superintelligent systems.
TWIML Online Meetup Update!
A few meetup-related announcements:
- Our next TWIML Online Meetup is coming up on September 19th, and will feature David Clement presenting the paper “DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills.” This should be a fun one, and I encourage you to join us.
- Our second session of the Fast.ai study group is starting soon. If you have some coding experience and would like to learn state-of-the-art deep learning, this is a great way to do it.
Join both of these great groups by signing up at twimlai.com/meetup!
About Nick
Mentioned in the Interview
- Paper: Strategic Implications of Openness in AI Development
- Superintelligence: Paths, Dangers, Strategies
- TWIML Presents: Series page
- TWIML Events Page
- TWIML Meetup
- TWIML Newsletter
“More On That Later” by Lee Rosevere, licensed under CC BY 4.0