How Microsoft Scales Testing and Safety for Generative AI with Sarah Bird

EPISODE 691 | JULY 1, 2024

About this Episode

Today, we're joined by Sarah Bird, chief product officer of responsible AI at Microsoft. We discuss the testing and evaluation techniques Microsoft applies to ensure the safe deployment and use of generative AI, large language models, and image generation. In our conversation, we explore the unique risks and challenges presented by generative AI, the balance between fairness and security concerns, the use of adaptive and layered defense strategies to respond rapidly to unforeseen AI behaviors, the importance of automated AI safety testing and evaluation alongside human judgment, and the implementation of red teaming and governance. Sarah also shares lessons from Microsoft's ‘Tay’ and ‘Bing Chat’ incidents, along with her thoughts on the rapidly evolving GenAI landscape.

About the Guest

Sarah Bird

Microsoft

Thanks to our sponsor Microsoft

I’d like to send a huge thanks to our friends at Microsoft for their support of the podcast and their sponsorship of today’s episode. Microsoft is your gateway to the future through cutting-edge AI technology! From virtual assistants to groundbreaking machine learning, Microsoft is a leader in AI innovation. The company is committed to ensuring responsible AI use, empowering people worldwide, including startups and digital natives, with intelligent technology to tackle societal challenges in sustainability, accessibility, and humanitarian action. Microsoft technologies empower you, your startup, and your digital community to achieve more and innovate boundlessly. Explore the possibilities by visiting Microsoft.ai
