Benchmarking Machine Learning with MLCommons with Peter Mattson

EPISODE 434
|
DECEMBER 7, 2020

About this Episode

Today we're joined by Peter Mattson, President of MLCommons and a Staff Engineer at Google. In our conversation, Peter discusses MLCommons and MLPerf: the former an open engineering group with the goal of accelerating machine learning innovation, the latter a set of standardized machine learning benchmarks that measure metrics such as model training speed and inference throughput. We explore the target users of the MLPerf benchmarks, the need for benchmarks in the ethics, bias, and fairness space, and how MLCommons is approaching this through the "People's Speech" dataset. We also walk through the MLCommons best practices for getting a model into production, why that process is so difficult, and how MLCube can make it easier for researchers and developers.

About the Guest

Peter Mattson

MLCommons
