# benchmark-any-mlperf-inference-implementation

Automatically generated README for this automation recipe: **benchmark-any-mlperf-inference-implementation**

- Category: *MLPerf benchmark support*
- License: Apache 2.0
- CM meta description for this script: `_cm.yaml`
- Output cached? False
## Reuse this script in your project

### Install MLCommons CM automation meta-framework

### Pull CM repository with this automation recipe (CM script)

```bash
cm pull repo mlcommons@cm4mlops
```

### Print CM help from the command line

```bash
cmr "benchmark run natively all inference any mlperf mlperf-implementation implementation mlperf-models" --help
```
## Run this script

### Run this script via CLI

```bash
cm run script --tags=benchmark,run,natively,all,inference,any,mlperf,mlperf-implementation,implementation,mlperf-models[,variations] [--input_flags]
```

### Run this script via CLI (alternative)

```bash
cmr "benchmark run natively all inference any mlperf mlperf-implementation implementation mlperf-models [variations]" [--input_flags]
```
### Run this script from Python

```python
import cmind

r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'benchmark,run,natively,all,inference,any,mlperf,mlperf-implementation,implementation,mlperf-models',
                  'out': 'con',
                  ...
                  (other input keys for this script)
                  ...
                 })
if r['return'] > 0:
    print(r['error'])
```
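For scripting around several runs, the input dictionary above can be assembled programmatically. A minimal sketch, assuming the input flags documented in this README (`models`, `backends`, `devices`, `division`, `category`); the helper `build_cm_input` and the example model/backend values are illustrative, not part of the recipe:

```python
# Illustrative helper (not part of the recipe): assemble the input
# dictionary for cmind.access() from lists of models/backends/devices.
TAGS = ('benchmark,run,natively,all,inference,any,mlperf,'
        'mlperf-implementation,implementation,mlperf-models')

def build_cm_input(models, backends, devices,
                   division='open', category='edge'):
    # The comma-separated values mirror the --models/--backends/--devices
    # flags; division and category default to the script's defaults.
    return {
        'action': 'run',
        'automation': 'script',
        'tags': TAGS,
        'out': 'con',
        'models': ','.join(models),
        'backends': ','.join(backends),
        'devices': ','.join(devices),
        'division': division,
        'category': category,
    }

inp = build_cm_input(['resnet50', 'bert-99'], ['onnxruntime'], ['cpu'])
```

The resulting dictionary can then be passed to `cmind.access(inp)` as in the snippet above.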
### Run this script via Docker (beta)

```bash
cm docker script "benchmark run natively all inference any mlperf mlperf-implementation implementation mlperf-models[variations]" [--input_flags]
```
## Variations

### Group "implementation"
- `_deepsparse`
  - ENV variables:
    - DIVISION: `open`
    - IMPLEMENTATION: `deepsparse`
- `_intel`
  - ENV variables:
    - IMPLEMENTATION: `intel`
- `_mil`
  - ENV variables:
    - IMPLEMENTATION: `mil`
- `_nvidia`
  - ENV variables:
    - IMPLEMENTATION: `nvidia-original`
- `_qualcomm`
  - ENV variables:
    - IMPLEMENTATION: `qualcomm`
- `_reference`
  - ENV variables:
    - IMPLEMENTATION: `reference`
- `_tflite-cpp`
  - ENV variables:
    - IMPLEMENTATION: `tflite_cpp`
### Group "power"
- `_performance-only` (default)
- `_power`
  - ENV variables:
    - POWER: `True`
### Group "sut"
- `_aws-dl2q.24xlarge`, `_macbookpro-m1`
  - ENV variables:
    - CATEGORY: `edge`
    - DIVISION: `closed`
- `_mini`, `_orin`, `_orin.32g`
  - ENV variables:
    - CATEGORY: `edge`
    - DIVISION: `closed`
- `_phoenix`
  - ENV variables:
    - CATEGORY: `edge`
    - DIVISION: `closed`
- `_rb6`, `_rpi4`, `_sapphire-rapids.24c`
  - ENV variables:
    - CATEGORY: `edge`
    - DIVISION: `closed`
### Default variations

`_performance-only`
## Script flags mapped to environment

- `--backends=value` → `BACKENDS=value`
- `--category=value` → `CATEGORY=value`
- `--devices=value` → `DEVICES=value`
- `--division=value` → `DIVISION=value`
- `--extra_args=value` → `EXTRA_ARGS=value`
- `--models=value` → `MODELS=value`
- `--power_server=value` → `POWER_SERVER=value`
- `--power_server_port=value` → `POWER_SERVER_PORT=value`
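The mapping above is a straightforward renaming of each flag to an uppercase environment key. A sketch of it as a lookup table (the function `flags_to_env` is hypothetical, for illustration only; the key names come from the list above):

```python
# Hypothetical illustration (not part of the recipe) of the
# flag -> ENV key mapping documented above.
FLAG_TO_ENV = {
    'backends': 'BACKENDS',
    'category': 'CATEGORY',
    'devices': 'DEVICES',
    'division': 'DIVISION',
    'extra_args': 'EXTRA_ARGS',
    'models': 'MODELS',
    'power_server': 'POWER_SERVER',
    'power_server_port': 'POWER_SERVER_PORT',
}

def flags_to_env(flags):
    """Translate --flag=value pairs into the ENV keys the script receives."""
    return {FLAG_TO_ENV[name]: value for name, value in flags.items()}

env = flags_to_env({'models': 'resnet50,bert-99', 'devices': 'cpu'})
```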
## Default environment

These keys can be updated via `--env.KEY=VALUE`, an `env` dictionary in `@input.json`, or the script flags listed above.

- DIVISION: `open`
- CATEGORY: `edge`
## Native script being run

No run file exists for Windows.
## Script output

```bash
cmr "benchmark run natively all inference any mlperf mlperf-implementation implementation mlperf-models [variations]" [--input_flags] -j
```