Today we conclude our AWS re:Invent 2022 series joined by Michael Kearns, a professor in the Department of Computer and Information Science at the University of Pennsylvania and an Amazon Scholar. In our conversation, we briefly explore Michael’s broader research interests in responsible AI and ML governance, as well as his role at AWS. We then discuss the announcement of AI Service Cards, which extend the idea of “model cards” to a holistic, system level rather than describing an individual model. We walk through the information presented on the cards and explore the decision-making process behind which information was omitted. We also get Michael’s take on the long-running debate over algorithmic bias versus dataset bias, the current issues surrounding that topic, and the research he has seen (and hopes to see) addressing questions of “fairness” in large language models.
You know AWS as a cloud computing technology leader, but did you realize the company offers a broad array of services and infrastructure at all three layers of the machine learning technology stack? AWS has been focused on making ML accessible to customers of all sizes and across industries, and over 100,000 of them trust AWS for machine learning and artificial intelligence services. AWS is constantly innovating across all areas of ML, including infrastructure, tools on Amazon SageMaker, and AI services such as Amazon CodeWhisperer, an AI-powered coding companion that improves developer productivity by generating code recommendations based on the code and comments in an IDE. AWS also created purpose-built ML accelerators for the training (AWS Trainium) and inference (AWS Inferentia) of large language and vision models on AWS.
To learn more about AWS ML and AI services, and how they’re helping customers accelerate their machine learning journeys, visit twimlai.com/go/awsml.