# Text Summarization using LLAMA2-70b
## Dataset
The benchmark implementation run command automatically downloads the validation and calibration datasets and performs the necessary preprocessing. If you want to download only the datasets, you can use the commands below.

- `--outdirname=<PATH_TO_DOWNLOAD_OPENORCA_DATASET>` can be provided to download the dataset to a specific location.
## Model
The benchmark implementation run command automatically downloads the required model and performs the necessary conversions. If you want to download only the official model, you can use the commands below.
Note: One has to accept the MLCommons Llama 2 License Confidentiality Notice to access the model files in MLCommons storage.
### Get the Official MLPerf LLAMA2-70B model from MLCommons Storage
```
mlcr get,ml-model,llama2-70b,_pytorch,_r2-downloader,_70b,_mlc -j
```
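As a sketch of how the download location can be customized, the command above accepts the `--outdirname` flag described later on this page. The path used here is a hypothetical placeholder; adjust it for your system.

```shell
# Download the official MLPerf LLAMA2-70B model to a custom directory.
# /data/models/llama2-70b is a hypothetical path; replace with your own.
mlcr get,ml-model,llama2-70b,_pytorch,_r2-downloader,_70b,_mlc -j \
    --outdirname=/data/models/llama2-70b
```

This avoids downloading the model into the default MLC cache location.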
Note: One has to accept the MLCommons Llama 2 License Confidentiality Notice to access the full precision model files in MLCommons storage, which are needed for the quantization process.

```
mlcr get,ml-model,llama2-70b,_nvidia,_fp8,_v5.1 -j
```

- Use `--checkpoint=<Full Precision model path>` if the model is already downloaded to a specific location.
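If the full precision checkpoint is already on disk, it can be passed in with the `--checkpoint` flag so the quantization step reuses it instead of re-downloading. The path below is a hypothetical placeholder.

```shell
# Quantize to FP8 from an existing full precision checkpoint.
# /data/models/llama2-70b-full is a hypothetical path; replace with your own.
mlcr get,ml-model,llama2-70b,_nvidia,_fp8,_v5.1 -j \
    --checkpoint=/data/models/llama2-70b-full
```

The same flag applies to the v5.0 and pre-quantized variants below.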
Note: One has to accept the MLCommons Llama 2 License Confidentiality Notice to access the full precision model files in MLCommons storage, which are needed for the quantization process.

```
mlcr get,ml-model,llama2-70b,_nvidia,_fp8,_v5.0 -j
```

- Use `--checkpoint=<Full Precision model path>` if the model is already downloaded to a specific location.
Note: One has to accept the MLCommons Llama 2 License Confidentiality Notice to access the full precision model files and pre-quantized model files in MLCommons storage.

```
mlcr get,ml-model,llama2-70b,_nvidia,_fp8,_v5.0,_pre-quantized -j
```

- Use `--checkpoint=<Full Precision model path>` if the full precision model is already downloaded to a specific location.
- `--outdirname=<PATH_TO_DOWNLOAD_LLAMA2_70B_MODEL>` can be provided to download the model to a specific location.