# get-ml-model-gptj

Automatically generated README for this automation recipe: **get-ml-model-gptj**

* Category: **AI/ML models**
* License: **Apache 2.0**
* CM meta description for this script: `_cm.json`
* Output cached? *True*
## Reuse this script in your project

### Install MLCommons CM automation meta-framework

### Pull CM repository with this automation recipe (CM script)

```bash
cm pull repo mlcommons@cm4mlops
```

### Print CM help from the command line

```bash
cmr "get raw ml-model gptj gpt-j large-language-model" --help
```
## Run this script

### Run this script via CLI

```bash
cm run script --tags=get,raw,ml-model,gptj,gpt-j,large-language-model[,variations] [--input_flags]
```

### Run this script via CLI (alternative)

```bash
cmr "get raw ml-model gptj gpt-j large-language-model [variations]" [--input_flags]
```
### Run this script from Python

```python
import cmind

r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'get,raw,ml-model,gptj,gpt-j,large-language-model',
                  'out': 'con',
                  ...
                  (other input keys for this script)
                  ...
                 })
if r['return'] > 0:
    print(r['error'])
```
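The return-code convention shown above (a nonzero `return` with the message in `error`) can be wrapped in a small helper. This is a minimal sketch of that convention only; `check_cm_result` is a hypothetical name, not part of the `cmind` API, and the stub dicts below stand in for real `cmind.access()` results:

```python
def check_cm_result(r):
    """Raise if a CM access() result dict signals an error.

    CM convention: r['return'] == 0 means success; otherwise
    r['error'] carries the error message.
    """
    if r.get('return', 1) > 0:
        raise RuntimeError(r.get('error', 'unknown CM error'))
    return r

# Illustrative stub results (no CM installation required):
check_cm_result({'return': 0})                      # passes through unchanged
# check_cm_result({'return': 1, 'error': 'oops'})   # would raise RuntimeError
```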
### Run this script via Docker (beta)

```bash
cm docker script "get raw ml-model gptj gpt-j large-language-model[variations]" [--input_flags]
```
## Variations

* No group (any combination of variations can be selected)

  * `_batch_size.#`
    * ENV variables:
      * `CM_ML_MODEL_BATCH_SIZE`: `#`
* Group "**download-tool**"

  * `_rclone` (default)
    * ENV variables:
      * `CM_DOWNLOAD_FILENAME`: `checkpoint`
      * `CM_DOWNLOAD_URL`: `<<<CM_RCLONE_URL>>>`
  * `_wget`
    * ENV variables:
      * `CM_DOWNLOAD_URL`: `<<<CM_PACKAGE_URL>>>`
      * `CM_DOWNLOAD_FILENAME`: `checkpoint.zip`
* Group "**framework**"

  * `_pytorch` (default)
    * ENV variables:
      * `CM_ML_MODEL_DATA_LAYOUT`: `NCHW`
      * `CM_ML_MODEL_FRAMEWORK`: `pytorch`
      * `CM_ML_STARTING_WEIGHTS_FILENAME`: `<<<CM_PACKAGE_URL>>>`
  * `_saxml`
* Group "**model-provider**"

  * `_intel`
  * `_mlcommons` (default)
  * `_nvidia`
    * ENV variables:
      * `CM_TMP_ML_MODEL_PROVIDER`: `nvidia`
* Group "**precision**"

  * `_fp32`
    * ENV variables:
      * `CM_ML_MODEL_INPUT_DATA_TYPES`: `fp32`
      * `CM_ML_MODEL_PRECISION`: `fp32`
      * `CM_ML_MODEL_WEIGHT_DATA_TYPES`: `fp32`
  * `_fp8`
    * ENV variables:
      * `CM_ML_MODEL_INPUT_DATA_TYPES`: `fp8`
      * `CM_ML_MODEL_WEIGHT_DATA_TYPES`: `fp8`
  * `_int4`
    * ENV variables:
      * `CM_ML_MODEL_INPUT_DATA_TYPES`: `int4`
      * `CM_ML_MODEL_WEIGHT_DATA_TYPES`: `int4`
  * `_int8`
    * ENV variables:
      * `CM_ML_MODEL_INPUT_DATA_TYPES`: `int8`
      * `CM_ML_MODEL_PRECISION`: `int8`
      * `CM_ML_MODEL_WEIGHT_DATA_TYPES`: `int8`
  * `_uint8`
    * ENV variables:
      * `CM_ML_MODEL_INPUT_DATA_TYPES`: `uint8`
      * `CM_ML_MODEL_PRECISION`: `uint8`
      * `CM_ML_MODEL_WEIGHT_DATA_TYPES`: `uint8`
### Default variations

`_mlcommons,_pytorch,_rclone`
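The `[,variations]` suffix in the CLI commands above means variation tags are simply appended to the base tag list. A minimal sketch of how a full `--tags` string is assembled (the chosen variations are just an example selection, one per group):

```python
base_tags = "get,raw,ml-model,gptj,gpt-j,large-language-model"
# Example: explicitly selecting the default variations listed above
variations = ["_mlcommons", "_pytorch", "_rclone"]

# Variation tags join the base tags with commas
tags = ",".join([base_tags] + variations)
print(tags)
```

The resulting string is what you would pass as `cm run script --tags=...`.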
## Script flags mapped to environment

* `--checkpoint=value` → `GPTJ_CHECKPOINT_PATH=value`
* `--download_path=value` → `CM_DOWNLOAD_PATH=value`
* `--to=value` → `CM_DOWNLOAD_PATH=value`
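The flag-to-environment mapping above can be modeled as a plain dictionary. The sketch below is purely illustrative (it is not CM's internal implementation); note that `--download_path` and `--to` map to the same variable, so they act as aliases:

```python
# Mapping taken from the list above
FLAG_TO_ENV = {
    "checkpoint": "GPTJ_CHECKPOINT_PATH",
    "download_path": "CM_DOWNLOAD_PATH",
    "to": "CM_DOWNLOAD_PATH",  # alias of --download_path
}

def flags_to_env(flags):
    """Translate {'flag': 'value'} pairs into {'ENV_VAR': 'value'}."""
    return {FLAG_TO_ENV[k]: v for k, v in flags.items() if k in FLAG_TO_ENV}

print(flags_to_env({"to": "/tmp/models"}))
```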
## Native script being run

No run file exists for Windows.

## Script output

```bash
cmr "get raw ml-model gptj gpt-j large-language-model [variations]" [--input_flags] -j
```