AI Sentience, Agency and Catastrophic Risk with Yoshua Bengio

EPISODE 654

About this Episode

Today we’re joined by Yoshua Bengio, professor at Université de Montréal. In our conversation, Yoshua discusses AI safety and the potentially catastrophic risks of AI misuse. He highlights the dangers of AI being used to manipulate people, spread disinformation, cause harm, and further concentrate power in society. We dive deep into the risks that arise once AI reaches human-level competence in enough areas, and tackle the challenges of defining and understanding concepts like agency and sentience. Our conversation also touches on solutions for AI safety, such as robust safety guardrails, investments in national security protections and countermeasures, bans on systems whose safety is uncertain, and the development of governance-driven AI systems.


One Response

  1. At 45:50 Prof. Bengio returned to a theme that surfaced a number of times in his comments; he said, “it might get a better handle of what humans typically have in mind,” and then went on to talk about also taking into account a diversity of opinions. This seems like an impossible requirement to build against; if one thing seems apparent in our present world, it is that there is very little agreement across appreciably sized groups of humans along almost every dimension that matters (even what matters is not broadly agreed upon). What does “alignment” mean, and how can we “align” AI systems, if we humans possess very little “alignment” among ourselves? Absent any foundational, broadly (universally?) agreed moral or ethical framework, it seems probable that AI systems will be “aligned” according to whatever replacement framework steps in to fill that void. And, by definition, since there is no universal foundational framework, that “alignment” will be essentially pragmatic in nature, suitable to the immanent motivations of whoever is building the system.
