Privacy and Security for Stable Diffusion and LLMs with Nicholas Carlini

EPISODE 618 | FEBRUARY 27, 2023

About this Episode

Today we’re joined by Nicholas Carlini, a research scientist at Google Brain. Nicholas works at the intersection of machine learning and computer security, and his recent paper “Extracting Training Data from LLMs” has generated quite a buzz within the ML community. In our conversation, we discuss the current state of adversarial machine learning research, the dynamics of dealing with privacy issues in black-box vs. accessible models, what privacy attacks against vision models like diffusion models look like, and the scale of “memorization” within these models. We also explore Nicholas’ work on data poisoning, which examines what happens when a bad actor can take control of a small fraction of the data an ML model is trained on.
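For readers curious about the mechanics behind the paper, here is a minimal sketch of the general idea of an extraction attack: sample freely from a language model, then rank the generations by a memorization signal. The model choice (gpt2), sampling parameters, and the zlib-to-perplexity scoring ratio below are illustrative assumptions, not a faithful reproduction of the paper’s pipeline.

```python
import zlib

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; any causal LM works for this sketch.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()


def perplexity(text: str) -> float:
    """Per-token perplexity of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()


def zlib_entropy(text: str) -> int:
    """Compressed length of the text: a cheap proxy for how much
    novel, non-repetitive content it contains."""
    return len(zlib.compress(text.encode("utf-8")))


# Sample unconditioned generations from the model.
start = torch.tensor([[tokenizer.bos_token_id]])
samples = model.generate(
    start,
    do_sample=True,
    top_k=40,
    max_length=64,
    num_return_sequences=10,
    pad_token_id=tokenizer.eos_token_id,
)

# Score each sample: text the model finds unusually easy to predict
# (low perplexity) despite being hard to compress is a candidate for
# verbatim memorization of training data.
for seq in samples:
    text = tokenizer.decode(seq, skip_special_tokens=True).strip()
    if len(text) < 20:  # skip degenerate or empty generations
        continue
    score = zlib_entropy(text) / perplexity(text)
    print(f"{score:8.2f}  {text[:60]!r}")
```

The real attack operates at a much larger scale, generating enormous numbers of samples and verifying candidates against known data; the sketch only shows the ranking idea.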

About the Guest

Nicholas Carlini

Google DeepMind
