# get-tensorrt

Automatically generated README for this automation recipe: **get-tensorrt**

* Category: **CUDA automation**
* License: **Apache 2.0**
* Notes from the authors, contributors and users: *README-extra*
* CM meta description for this script: *_cm.json*
* Output cached? *True*
## Reuse this script in your project

### Install the MLCommons CM automation meta-framework

### Pull the CM repository with this automation recipe (CM script)

```bash
cm pull repo mlcommons@cm4mlops
```

### Print CM help from the command line

```bash
cmr "get tensorrt nvidia" --help
```
## Run this script

### Run this script via CLI

```bash
cm run script --tags=get,tensorrt,nvidia[,variations] [--input_flags]
```

### Run this script via CLI (alternative)

```bash
cmr "get tensorrt nvidia [variations]" [--input_flags]
```

### Run this script from Python

```python
import cmind

r = cmind.access({'action':'run',
                  'automation':'script',
                  'tags':'get,tensorrt,nvidia',
                  'out':'con',
                  ...
                  (other input keys for this script)
                  ...
                 })

if r['return']>0:
    print(r['error'])
```

### Run this script via Docker (beta)

```bash
cm docker script "get tensorrt nvidia[variations]" [--input_flags]
```
## Variations

* No group (any combination of variations can be selected)

  * `_dev`
    * ENV variables:
      * `CM_TENSORRT_REQUIRE_DEV`: `yes`
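Variations are selected by appending their name, with the leading underscore, to the script's tag list. A minimal sketch of how such a tag string could be assembled; `build_tags` is a hypothetical helper for illustration, not part of the CM API:

```python
def build_tags(base_tags, variations=()):
    """Join base tags and variation names (prefixed with '_') into
    the comma-separated string expected by --tags."""
    return ','.join(list(base_tags) + ['_' + v.lstrip('_') for v in variations])

# Selecting the _dev variation (which sets CM_TENSORRT_REQUIRE_DEV=yes):
print(build_tags(['get', 'tensorrt', 'nvidia'], ['dev']))
# get,tensorrt,nvidia,_dev
```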
## Input Flags

* `--input`: Full path to the installed TensorRT library (nvinfer)
* `--tar_file`: Full path to the TensorRT tar file downloaded from the Nvidia website (https://developer.nvidia.com/tensorrt)

### Script flags mapped to environment

* `--input=value` → `CM_INPUT=value`
* `--tar_file=value` → `CM_TENSORRT_TAR_FILE_PATH=value`
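The mapping above can be sketched as a small lookup table. This is illustrative only, under the assumption of a simple one-to-one translation; `FLAG_TO_ENV` and `flags_to_env` are hypothetical names, not part of the CM codebase:

```python
# Hypothetical illustration of how the script flags listed above are
# translated to environment variables before the native script runs.
FLAG_TO_ENV = {
    'input': 'CM_INPUT',
    'tar_file': 'CM_TENSORRT_TAR_FILE_PATH',
}

def flags_to_env(flags):
    """Translate a flag dict such as {'tar_file': '/path/to/tar'}
    into the environment dict the script would see."""
    return {FLAG_TO_ENV[k]: v for k, v in flags.items() if k in FLAG_TO_ENV}

print(flags_to_env({'tar_file': '/tmp/TensorRT.tar.gz'}))
# {'CM_TENSORRT_TAR_FILE_PATH': '/tmp/TensorRT.tar.gz'}
```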
## Native script being run

No run file exists for Windows.

## Script output

```bash
cmr "get tensorrt nvidia [variations]" [--input_flags] -j
```