Security and Safety in AI: Adversarial Examples, Bias and Trust with Moustapha Cissé



About this Episode

In this episode I'm joined by Moustapha Cissé, Research Scientist at Facebook AI Research (FAIR) in Paris.

Moustapha's broad research interests include the security and safety of AI systems, and we spend some time discussing his work on adversarial examples and on building systems that are robust to adversarial attacks. More broadly, we discuss the role of bias in datasets and explore his vision for models that can identify these biases and adjust their own training to avoid learning them.


