Benchmarking Machine Learning with MLCommons with Peter Mattson

EPISODE 434

About this Episode

Today we're joined by Peter Mattson, President of MLCommons and a Staff Engineer at Google.

In our conversation with Peter, we discuss MLCommons and MLPerf: the former an open engineering group with the goal of accelerating machine learning innovation, and the latter a set of standardized machine learning benchmarks used to measure things like model training speed and inference throughput.

We explore the target users for the MLPerf benchmarks, the need for benchmarks in the ethics, bias, and fairness space, and how MLCommons is approaching this through the "People's Speech" dataset. We also walk through the MLCommons best practices for getting a model into production, why it's so difficult, and how MLCube can make the process easier for researchers and developers.


Thanks to our sponsor Pachyderm

Pachyderm is an enterprise-grade, open source data science platform that makes explainable, repeatable, and scalable machine learning and artificial intelligence possible. The platform brings together version control for data with the tools to build scalable end-to-end machine learning and artificial intelligence pipelines, while empowering users to use any language, framework, or tool they want. The company is headquartered in San Francisco and is backed by Benchmark, M12, Y Combinator, and others.

