CM Script Automation Documentation

    Index

    • AI-ML-datasets
    • AI-ML-frameworks
    • AI-ML-models
    • AI-ML-optimization
    • Cloud-automation
    • CM-automation
    • CM-Interface
    • CM-interface-prototyping
    • Collective-benchmarking
    • Compiler-automation
    • CUDA-automation
    • Dashboard-automation
    • Detection-or-installation-of-tools-and-artifacts
    • DevOps-automation
    • Docker-automation
    • GUI
    • Legacy-CK-support
    • MLPerf-benchmark-support
    • Modular-AI-ML-application-pipeline
    • Modular-application-pipeline
    • Modular-MLPerf-benchmarks
    • Modular-MLPerf-inference-benchmark-pipeline
    • Modular-MLPerf-training-benchmark-pipeline
    • Platform-information
    • Python-automation
    • Remote-automation
    • Reproduce-MLPerf-benchmarks
    • Reproducibility-and-artifact-evaluation
    • Tests
    • TinyML-automation