How Microsoft Scales Testing and Safety for Generative AI with Sarah Bird

EPISODE 691

About this Episode

Today, we're joined by Sarah Bird, chief product officer of responsible AI at Microsoft. We discuss the testing and evaluation techniques Microsoft applies to ensure the safe deployment and use of generative AI, large language models, and image generation. In our conversation, we explore the unique risks and challenges presented by generative AI, the balance between fairness and security concerns, the application of adaptive and layered defense strategies for rapid response to unforeseen AI behaviors, the importance of automated AI safety testing and evaluation alongside human judgment, and the implementation of red teaming and governance. Sarah also shares lessons from Microsoft's 'Tay' and 'Bing Chat' incidents, along with her thoughts on the rapidly evolving GenAI landscape.

Thanks to our sponsor Microsoft

I’d like to send a huge thanks to our friends at Microsoft for their support of the podcast and their sponsorship of today’s episode. Microsoft creates AI-powered platforms and tools to meet the evolving needs of customers, and is committed to making AI available broadly and doing so responsibly, in support of its mission to empower every person and every organization on the planet to achieve more. Explore the possibilities by visiting Microsoft.ai.
