Scalable Differential Privacy for Deep Learning with Nicolas Papernot

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

In this episode of our Differential Privacy series, I’m joined by Nicolas Papernot, a Google PhD Fellow in Security and a graduate student in the Department of Computer Science at Penn State University.

Nicolas and I continue this week’s look into differential privacy with a discussion of his recent paper, Semi-supervised Knowledge Transfer for Deep Learning From Private Training Data. In our conversation, Nicolas describes the Private Aggregation of Teacher Ensembles (PATE) model proposed in the paper, and how it provides differential privacy guarantees in a scalable way that can be applied to deep neural networks. We also explore an interesting side effect of applying differential privacy to machine learning: it inherently resists overfitting, leading to models that generalize better.
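To make the aggregation step concrete, here is a minimal sketch of the noisy-max voting that PATE uses to label public data: each teacher votes for a class, Laplace noise is added to the vote counts, and the noisy winner becomes the (privacy-preserving) label. The function name and defaults below are illustrative, not taken from the paper; the paper's noise parameter γ controls the Laplace scale (smaller γ means more noise and stronger privacy).

```python
import numpy as np

def noisy_aggregate(teacher_preds, num_classes, gamma, rng=None):
    """PATE-style noisy-max aggregation (illustrative sketch).

    teacher_preds: iterable of class labels, one per teacher.
    gamma: inverse Laplace scale; noise is drawn from Lap(1/gamma).
    Returns the class with the highest noisy vote count.
    """
    rng = np.random.default_rng(rng)
    # Count votes per class, then perturb each count with Laplace noise.
    votes = np.bincount(np.asarray(teacher_preds), minlength=num_classes).astype(float)
    votes += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(votes))

# Example: 9 of 10 hypothetical teachers vote for class 1.
label = noisy_aggregate([1] * 9 + [0], num_classes=2, gamma=0.5, rng=0)
```

The student model is then trained only on these noisy labels, so it never touches the private training data directly; the noise on the vote counts is what makes the released labels differentially private.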

Thanks to our Sponsor!

Thanks to Georgian Partners for their continued support of the podcast and for sponsoring this series. Georgian Partners is a venture capital firm that invests in growth-stage business software companies in the US and Canada. Post-investment, Georgian works closely with portfolio companies to accelerate adoption of key technologies, including machine learning and differential privacy. To help their portfolio companies provide privacy guarantees to their customers, Georgian recently launched its first software product, Epsilon, a differentially private machine learning solution. You’ll learn more about Epsilon in my interview with Georgian’s Chang Liu later this week, but if you find this field interesting, I’d encourage you to visit the differential privacy resource center they’ve set up.

About Nicolas

Mentioned in the Interview

“More On That Later” by Lee Rosevere, licensed under CC BY 4.0
