Anticipating Superintelligence with Nick Bostrom
EPISODE 181 | SEPTEMBER 17, 2018
About this Episode
In this episode, we're joined by Nick Bostrom, professor in the Faculty of Philosophy at the University of Oxford, where he also heads the Future of Humanity Institute, a multidisciplinary research institute focused on answering big-picture questions for humanity regarding AI safety and ethics.
Nick is, of course, also the author of the book "Superintelligence: Paths, Dangers, Strategies." In our conversation, we discuss the risks associated with artificial general intelligence and the more advanced AI systems Nick refers to as superintelligence. We also discuss Nick's writings on openness in AI development, and the advantages and costs of open and closed development for nations and AI research organizations. Finally, we look at what good safety precautions might entail and how we can create an effective ethics framework for superintelligent systems.
About the Guest
Nick Bostrom
Future of Humanity Institute