Today we conclude our AWS re:Invent 2022 series joined by Michael Kearns, a professor in the Department of Computer and Information Science at the University of Pennsylvania, as well as an Amazon Scholar. In our conversation, we briefly explore Michael’s broader research interests in responsible AI and ML governance and his role at AWS. We then discuss the announcement of service cards, AWS’s take on “model cards” at a holistic, system level as opposed to an individual model level. We walk through the information represented on the cards and explore the decision-making process around which information is omitted from them. We also get Michael’s take on the years-old debate of algorithmic bias vs. dataset bias, some of the current issues around this topic, and the research he has seen (and hopes to see) addressing issues of “fairness” in large language models.
You know AWS as a cloud computing technology leader, but did you know the company offers a broad array of services and infrastructure at all three layers of the machine learning technology stack? In fact, tens of thousands of customers trust AWS for machine learning and artificial intelligence services, and the company aims to put ML in the hands of every practitioner with innovative services like Amazon CodeWhisperer, a new ML-powered pair programming tool that helps developers improve productivity by significantly reducing the time it takes to build software applications.
To learn more about AWS ML and AI services, and how they’re helping customers accelerate their machine learning journeys, visit twimlai.com/go/awsml.