* [Tutorial: automate TinyMLPerf benchmark](#tutorial-automate-tinymlperf-benchmark)
  * [Install CM automation language](#install-cm-automation-language)
  * [Install MLCommons CK repository with CM automations](#install-mlcommons-ck-repository-with-cm-automations)
  * [Install Python virtual environment](#install-python-virtual-environment)
  * [Install EEMBC Energy Runner repository](#install-eembc-energy-runner-repository)
  * [Install CIFAR10 for image classification](#install-cifar10-for-image-classification)
  * [Setup boards](#setup-boards)
    * [STMicroelectronics NUCLEO-L4R5ZI](#stmicroelectronics-nucleo-l4r5zi)
  * [Download and run EEMBC Energy Runner](#download-and-run-eembc-energy-runner)
  * [Build and run TinyMLPerf benchmarks](#build-and-run-tinymlperf-benchmarks)
  * [Prepare submission](#prepare-submission)
  * [Visualize and compare results](#visualize-and-compare-results)
  * [Contact MLCommons task force on automation and reproducibility](#contact-mlcommons-task-force-on-automation-and-reproducibility)

# Tutorial: automate TinyMLPerf benchmark
The MLCommons task force on automation and reproducibility is developing an open-source Collective Knowledge platform to make it easier for the community to run, visualize and optimize MLPerf benchmarks out of the box across diverse software, hardware, models and data.
This tutorial demonstrates how to automate a common setup for the TinyMLPerf benchmark and the EEMBC Energy Runner with the help of the MLCommons CM automation language on Linux or Windows.
If you have any questions about this tutorial, please get in touch via our public Discord server or open a GitHub issue here.
## Install CM automation language
Follow this guide to install the MLCommons CM automation language on your platform.
We have tested this tutorial with Ubuntu 20.04 and Windows 10.
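For reference, CM is distributed as the `cmind` package on PyPI, so a typical installation on a machine with Python 3 looks like the short sketch below; exact prerequisites depend on your platform and are covered in the guide above:

```bash
# Install the MLCommons CM automation language (the "cmind" Python package)
python3 -m pip install cmind

# The "cm" command-line front end should now be available in your PATH
```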
## Install MLCommons CK repository with CM automations
```bash
cm pull repo mlcommons@ck
```
If you have already been using CM and would like to start from a clean installation, you can clean the CM cache as follows:

```bash
cm rm cache -f
```
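You can check which artifacts are currently cached (for example, to confirm that the cache is empty after cleaning) with the same `cm show cache` command used later in this tutorial, here without a tag filter:

```bash
# List all entries currently held in the CM cache
cm show cache
```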
## Install Python virtual environment
Since the EEMBC Energy Runner and the TinyMLPerf data sets require many specific Python dependencies, we suggest installing a Python virtual environment using CM as follows:

```bash
cm run script "install python-venv" --name=tiny --version_min=3.9
export CM_SCRIPT_EXTRA_CMD="--adr.python.name=tiny"
```
You can find its location in the CM cache as follows:

```bash
cm show cache --tags=python-venv
```
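The `CM_SCRIPT_EXTRA_CMD` variable above simply carries the `--adr.python.name=tiny` flag that CM appends to subsequent script invocations; if you prefer not to export the variable, you can pass the same flag explicitly (an equivalent alternative, shown here with the EEMBC script from the next section):

```bash
# Force a CM script to use the "tiny" virtual environment without exporting CM_SCRIPT_EXTRA_CMD
cm run script "get eembc energy-runner src" --adr.python.name=tiny
```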
## Install EEMBC Energy Runner repository
You can use CM to install the dependencies for the EEMBC Energy Runner and prepare the required directory structure for all TinyMLPerf benchmarks using this CM script:
cm run script "get eembc energy-runner src"
This CM script will download the sources and create an `eembc` directory in your `$HOME` directory on Linux, Windows or macOS with the partial data sets required by TinyMLPerf.
## Install CIFAR10 for image classification
The CIFAR10 data set is not included in the EEMBC energy-runner GitHub repository and can be generated on your machine using this CM script:
cm run script "get dataset cifar10 _tiny"
This script will download the CIFAR10 data set in the Python (TensorFlow) format together with the TinyMLPerf sources. It will then generate the samples required by the EEMBC Energy Runner in the following directory:

```
$HOME/eembc/runner/benchmarks/ulp-mlperf/datasets/ic01
```
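You can quickly verify that the samples were generated by listing that directory (the path is created by the script; the exact file names depend on the generated samples):

```bash
# Check that the generated CIFAR10 samples are in place
ls $HOME/eembc/runner/benchmarks/ulp-mlperf/datasets/ic01
```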
## Setup boards
### STMicroelectronics NUCLEO-L4R5ZI
If you run the EEMBC Energy Runner on Linux, please check that you have this rule installed in `/usr/lib/udev/rules.d`. If not, please copy it there and unplug/replug the board! See the related ticket for more details.
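As a rough sketch of that step (the actual rule file is the one linked above; `50-example.rules` is only a placeholder name here), copying the rule and asking udev to pick it up typically looks like this:

```bash
# Copy the udev rule for the board (placeholder file name) and reload the rules
sudo cp 50-example.rules /usr/lib/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger   # or simply unplug and replug the board
```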
Here is a list of CM automation scripts that can be reused in your experiments for this (and other) boards; see the example invocation after the list:
- Get EEMBC Energy Runner repository
- Get TinyMLPerf repository
- Get CIFAR10 dataset
- Build Tiny Models
- Flash Tiny Models
- Get Zephyr
- Get Zephyr SDK
- Get MicroTVM
- Get CMSIS_5
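Each of these scripts can be invoked via its tags in the same way as the scripts earlier in this tutorial; for example, the Zephyr dependency would typically be pulled in as shown below (the tag string is an assumption based on the script name, so please check the linked script for the exact tags):

```bash
# Pull the Zephyr dependency through CM (tags assumed from the script name above)
cm run script "get zephyr"
```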
## Download and run EEMBC Energy Runner
Download EEMBC Energy Runner for your platform from this website and run it. Normally, you should be able to see and initialize the connected board as described here.
## Build and run TinyMLPerf benchmarks
You can now follow this tutorial to build, flash and run the image classification and keyword spotting benchmarks with MicroTVM, Zephyr and CMSIS on the NUCLEO-L4R5ZI. It was prepared for the TinyMLPerf v1.1 submission round as part of this MLCommons community challenge.
You can then follow the official README to run benchmarks in performance, accuracy and energy modes.
## Prepare submission
We plan to automate TinyMLPerf submission for any hardware/software stack during the next submission round.
## Visualize and compare results
Please follow this README to import TinyMLPerf results (public or private) into the CM format so that you can visualize and compare them on your local machine while adding derived metrics and providing constraints.
We publish all public TinyMLPerf results in the MLCommons CK platform to help the community analyze, compare, reproduce, reuse and improve these results.
The ultimate goal of our MLCommons task force and the free MLCommons CK platform is to help users automatically generate Pareto-efficient end-to-end applications using MLPerf results based on their requirements and constraints (performance, accuracy, energy, hardware/software stack, costs).
## Contact MLCommons task force on automation and reproducibility
Please join the MLCommons task force on automation and reproducibility to get free help to automate and optimize MLPerf benchmarks for your software and hardware stack using the MLCommons CM automation language!