This week on the podcast we published my discussion with Oxford’s Nick Bostrom about his work in the growing field of AI safety and ethics. Bostrom heads the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for humanity regarding the safe and ethical development and use of AI. He is, of course, also well known as the author of the book Superintelligence: Paths, Dangers, Strategies.

While AI safety is still in its infancy, interest is growing quickly as general awareness of and attention to artificial general intelligence (AGI) have increased. As Bostrom points out at the beginning of our talk, hardly any resources were being funneled into studying the effects of AGI at the start of his career. Today, though, the AI renaissance we’ve been experiencing has spurred a desire to understand and plan for a possible future with superintelligent machines.

Understanding the implications of superintelligence touches a broad set of disciplines, ranging from political science (how superintelligent AI might lead to a dangerous arms race), to philosophy (what ethics should govern our treatment of a sentient AI), to computer science (how to build safety mechanisms into artificial general intelligence projects).

Towards the end of our chat, Nick shared some resources for those interested in exploring the topic further:

  • The Governance of AI Program, which Bostrom spearheads alongside his colleague Allan Dafoe, produced a report called The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. The report includes contributions from 26 authors across 14 institutions, detailing ways in which the wider AI community can begin to address issues of ethics and safety in research, development, and policy initiatives.
  • Written in partnership with the Machine Intelligence Research Institute, the paper The Ethics of Artificial Intelligence, co-authored by Nick Bostrom and Eliezer Yudkowsky, offers a broad overview of the safety and ethical concerns surrounding near-term and long-term AI development. It also addresses how to determine when technology is approaching AGI and what issues will need to be considered as it does.
  • A shorter read, but worth checking out, is this write-up from the Effective Altruism organization, which Bostrom mentions in our talk. The organization researches and advocates for the best possible outcomes for humanity across a wide spectrum of disciplines. Its article, Potential Risks from Advanced AI, offers a look into how organizations focused on the wider human good are approaching the issue of AI safety.

I’d love to hear your take on the interview or the topic of superintelligence and AI safety. Please reach out and let me know what you think.