

Table of contents:

* [Tutorial: automate, visualize and reproduce MLPerf training submissions](#tutorial-automate-visualize-and-reproduce-mlperf-training-submissions)
* [Install CM](#install-cm)
* [Import results to the CK platform](#import-results-to-the-ck-platform)
* [Visualize and compare MLPerf training results](#visualize-and-compare-mlperf-training-results)
* [Contact MLCommons task force on automation and reproducibility](#contact-mlcommons-task-force-on-automation-and-reproducibility)

# Tutorial: automate, visualize and reproduce MLPerf training submissions

The MLCommons task force on automation and reproducibility is developing an open-source Collective Knowledge platform to make it easier for the community to run, visualize and optimize MLPerf benchmarks out of the box across diverse software, hardware, models and data.

This tutorial demonstrates how to run and/or reproduce the MLPerf training benchmarks with the help of the MLCommons CM automation language.

If you have any questions about this tutorial, please get in touch via our public Discord server or open a GitHub issue here.

## Install CM

Follow this guide to install the MLCommons CM automation language on your platform.

We have tested this tutorial with Ubuntu 20.04 and Windows 10.
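If you prefer to see the commands up front, here is a minimal sketch of a typical CM setup on Ubuntu, assuming a pip-based install of the `cmind` package and the `mlcommons@ck` repository that hosts the MLPerf automation recipes; the linked guide remains the authoritative reference.

```bash
# Install the CM automation language (the "cmind" package on PyPI)
python3 -m pip install cmind

# Pull the MLCommons repository with CM automation recipes (scripts)
cm pull repo mlcommons@ck

# Sanity check: list the MLPerf-related CM scripts that are now available
cm find script --tags=mlperf
```

On Windows 10, the same pip and cm commands should work from a standard command prompt once Python and Git are installed.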

To be continued ...

## Import results to the CK platform

Follow this guide to import your MLPerf training results to the CK platform.
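As an illustration only: importing results into CM experiment entries generally uses `cm run script` with an import script, as sketched below. The script tags here are an assumption modelled on the analogous MLPerf inference import workflow; the guide above has the authoritative command for training results.

```bash
# ASSUMPTION: the script tags below mirror the MLPerf inference import
# workflow and may differ from the command given in the official guide
cm run script --tags=import,mlperf,training,to-experiment

# List the imported experiment entries that can be pushed to the CK platform
cm find experiment --tags=mlperf
```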

## Visualize and compare MLPerf training results

You can visualize and compare MLPerf results here. You can also use this collaborative platform inside your organization to reproduce and optimize benchmarks and applications of interest.

## Contact MLCommons task force on automation and reproducibility

Please join the MLCommons task force on automation and reproducibility to get free help to automate and optimize MLPerf benchmarks for your software and hardware stack using the MLCommons CM automation language!