CK/CM/CMX news
[ Back to index ]
News from the MLCommons Task Force on Automation and Reproducibility
202406
- We published a white paper about the Collective Knowledge Playground, Collective Mind, MLPerf and CM4MLOps: https://arxiv.org/abs/2406.16791
202403
- cKnowledge has completed a collaborative engineering project with MLCommons to enhance the CM workflow automation so that MLPerf inference benchmarks can be run across different models, software and hardware from different vendors in a unified way: GUI. (A minimal invocation sketch follows this list.)
- cTuning has validated the new MLCommons CM workflow by automating ~90% of all MLPerf inference v4.0 performance and power submissions while finding some of the top-performing and most cost-effective software/hardware configurations for AI systems: report.
- We presented a new project to "Automatically Compose High-Performance and Cost-Efficient AI Systems with MLCommons' Collective Mind and MLPerf" at the MLPerf-Bench workshop @HPCA'24.
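The 202403 items above describe the unified CM workflow for running MLPerf inference. A minimal sketch of how such a workflow can be driven from Python via the `cmind` API is shown below; the script tags and the `model`/`device` input keys are illustrative assumptions, not the authoritative interface, so consult the CM/CM4MLOps documentation for the exact options.

```python
# Minimal sketch (not the official recipe): calling a CM automation script
# from Python through the cmind API. The tags and the model/device input
# keys are assumptions for illustration; check the CM4MLOps documentation
# for the exact tags and inputs of the MLPerf inference workflow.
import cmind

r = cmind.access({
    'action': 'run',                 # run a CM automation script
    'automation': 'script',
    'tags': 'run-mlperf,inference',  # assumed tags of the MLPerf inference workflow
    'model': 'resnet50',             # assumed input key: model to benchmark
    'device': 'cpu',                 # assumed input key: target device
    'quiet': True
})

# cmind.access() returns a dictionary; a non-zero 'return' value signals an
# error whose description is stored under the 'error' key.
if r['return'] > 0:
    raise RuntimeError(r.get('error', 'CM script failed'))
```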
202311
- ACM/IEEE MICRO'23 used CM to automate artifact evaluation and make it easier for researchers to understand, prepare, run and reproduce research projects from published papers.
- The ACM YouTube channel has released the ACM REP'23 keynote about the MLCommons CM automation language and CK playground: toward a common language to facilitate reproducible research and technology transfer.
- Grigori Fursin and Arjun Suresh served as MLCommons liaisons at the Student Cluster Competition at SuperComputing'23, helping the community run, optimize and enhance MLPerf inference benchmarks using the MLCommons CM workflow automation language and CK playground.
202310
- Grigori Fursin gave an invited talk at AVCC'23 about our MLCommons CM automation language and how it can help develop modular, portable and technology-agnostic benchmarks.
- Grigori Fursin gave an IISWC'23 tutorial about our CM workflow automation language and how it can make it easier for researchers to reproduce their projects and validate them in the real world across rapidly evolving software and hardware.
202309
- The Collective Knowledge Technology v3 with the open-source MLCommons CM automation language, CK playground and C++ Modular Inference Library helped the community automate more than 90% of all MLPerf inference v3.1 results and cross 10,000 submissions in one round for the first time (submitted via the cTuning foundation)! Here is the list of new CM/CK capabilities available to everyone to prepare and automate their future MLPerf submissions - please check this HPC Wire article about cTuning's community submission and don't hesitate to reach out to us via our Discord server for more info!
- Our CK playground was featured at the AI Hardware Summit'23.
202307
The overview of the MedPerf project was published in Nature: Federated benchmarking of medical artificial intelligence with MedPerf!
202306
We were honored to give a keynote about our MLCommons automation and reproducibility language to facilitate reproducible experiments and bridge the growing gap between research and production at the 1st ACM Conference on Reproducibility and Replicability.
202305
Following the successful validation of our CK/CM technology by the community to automate MLPerf inference v3.0 submissions, the MLCommons Task Force on Automation and Reproducibility has prepared a presentation about our development plans for the MLCommons CK playground and MLCommons CM scripting language for Q3 2023.
Our current mission is to prepare new optimization challenges to help companies, students, researchers and practitioners reproduce and optimize MLPerf inference v3.0 results and/or submit new/better results to MLPerf inference v3.1 across diverse models, software and hardware as a community effort.
202304
We have successfully validated the MLCommons CK and CM technology to automate ~80% of MLPerf inference v3.0 submissions (98% of all power results).
MLCommons CK and CM have helped to automatically interconnect very diverse technology from Neural Magic, Qualcomm, Krai, cKnowledge, OctoML, Deelvin, DELL, HPE, Lenovo, Hugging Face, Nvidia and Apple and run it across diverse CPUs, GPUs and DSPs with PyTorch, ONNX, QAIC, TF/TFLite, TVM and TensorRT using popular cloud providers (GCP, AWS, Azure) as well as individual servers and edge devices via our recent open optimization challenge.
- Forbes article highlighting our MLCommons CK technology
- ZDNet article
- LinkedIn article from Grigori Fursin (MLCommons Task Force co-chair)
- LinkedIn article from Arjun Suresh (MLCommons Task Force co-chair)
202304
We pre-released a free, open-source and technology-agnostic Collective Knowledge Playground (MLCommons CK) to automate benchmarking, optimization and reproducibility of the MLPerf inference benchmark via collaborative challenges!
202302
New GUI to visualize all MLPerf results is available here.
202301
New GUI to run MLPerf inference is available here.
202212
We have added GitHub Actions to the MLPerf inference repo to automatically test the MLPerf inference benchmark with different models, data sets and frameworks using our customizable MLCommons CM-MLPerf workflows.
202211
Grigori Fursin and Arjun Suresh successfully validated the prototype of their new workflow automation language (MLCommons CM) at the Student Cluster Competition at SuperComputing'22. It was used to make it easier to prepare and run the MLPerf inference benchmark in just under 1 hour! Please test it using this CM tutorial.
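The tutorial mentioned above follows the common two-step CM pattern: pull a repository of automation recipes, then run a script from it. Below is a hedged Python sketch of that pattern using the `cmind` API; the repository name `mlcommons@ck` and the `detect,os` script tags come from the public CM repository but should be treated as assumptions for your setup.

```python
# Sketch of the two-step pattern used in CM tutorials: pull a repository of
# automation recipes, then run one of its scripts through the cmind API.
import cmind

# 1. Pull the public repository with CM automation recipes
#    (roughly equivalent to the CLI command `cm pull repo mlcommons@ck`).
r = cmind.access({'action': 'pull',
                  'automation': 'repo',
                  'artifact': 'mlcommons@ck'})
if r['return'] > 0:
    raise RuntimeError(r.get('error', 'failed to pull the CM repository'))

# 2. Run a simple script from that repository; the 'detect,os' tags
#    correspond to a script that detects the host operating system.
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'detect,os'})
if r['return'] > 0:
    raise RuntimeError(r.get('error', 'CM script failed'))
```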
202210
We have prototyped modular CM-MLPerf containers using our portable MLCommons CM scripting language.
202209
We have prepared a presentation about the mission of the MLCommons Task Force on automation and reproducibility.
202208
We have prototyped universal MLPerf inference workflows using the MLCommons CM scripting language.
202207
Grigori Fursin and Arjun Suresh have established an MLCommons Task Force on automation and reproducibility to continue developing MLCommons CK/CM as a community effort.
202206
We have pre-released stable and portable automation CM scripts to unify MLOps and DevOps across diverse software, hardware, models and data.
202205
We have prepared an example of portable and modular image classification using the MLCommons CM scripting language.
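For illustration, a hedged sketch of how that image-classification example can be invoked through the CM Python API is shown below; the `app,image-classification,onnx,python` tags mirror the naming used in the public CM script collection but are an assumption and should be verified against your local repository (for example with `cm find script`).

```python
# Sketch: running the modular image-classification example via the cmind API.
# The script tags are an assumption based on the public CM script collection;
# verify them locally before relying on this snippet.
import cmind

r = cmind.access({
    'action': 'run',
    'automation': 'script',
    'tags': 'app,image-classification,onnx,python',  # assumed script tags
    'quiet': True
})

if r['return'] > 0:
    raise RuntimeError(r.get('error', 'image-classification script failed'))

print('CM image-classification script finished successfully')
```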
202203
Following positive feedback from the community about our Collective Knowledge concept to facilitate reproducible research and technology transfer across rapidly evolving models, software, hardware and data, we have started developing its simplified version as a common scripting language to connect academia and industry: Collective Mind framework (MLCommons CM aka CK2).