Security and Safety in AI: Adversarial Examples, Bias and Trust with Moustapha Cissé

EPISODE 108 | FEBRUARY 5, 2018

About this Episode

In this episode I'm joined by Moustapha Cissé, Research Scientist at Facebook AI Research (FAIR) Paris. Moustapha's broad research interests include the security and safety of AI systems, and we spend some time discussing his work on adversarial examples and on building systems that are robust to adversarial attacks. More broadly, we discuss the role of bias in datasets and explore his vision for models that can identify these biases and adjust the way they train in order to avoid taking them on.
