Assessing the Risks of Open AI Models with Sayash Kapoor
EPISODE 675 | MARCH 11, 2024
About this Episode
Today we’re joined by Sayash Kapoor, a Ph.D. student in the Department of Computer Science at Princeton University. Sayash walks us through his paper, “On the Societal Impact of Open Foundation Models.” We dig into the controversy around AI safety, the risks and benefits of releasing open model weights, and how we can establish common ground for assessing the threats posed by AI. We discuss how the framework presented in the paper applies to specific risks, such as the biosecurity risk of open LLMs, as well as the growing problem of non-consensual intimate imagery (NCII) generated with open diffusion models.
About the Guest
Sayash Kapoor
Center for Information Technology Policy at Princeton University
Resources
- Paper: On the Societal Impact of Open Foundation Models
- Open Source Initiative
- Paper: Will releasing the weights of future large language models grant widespread access to pandemic agents?
- Can large language models democratize access to dual-use biotechnology?
- Artificial intelligence and biological misuse: Differentiating risks of language models and biological design tools
- StopNCII.org
- Civitai.com
- Workshop on Responsible and Open Foundation Models
- Article: Can Chatbots Help You Build a Bioweapon?
- OSS-Fuzz
- Open letter: Joint Statement on AI Safety and Openness
- Open letter: A Safe Harbor for Independent AI Evaluation
- A Safe Harbor for Platform Research
- Bipartisan Framework for U.S. AI Act
- Letter: USDOJ Letter to USCO
- Aug ‘20 - AI and the Responsible Data Economy with Dawn Song - #403
- June ‘20 - 2020: A Critical Inflection Point for Responsible AI with Rumman Chowdhury - #381