MLCommons™ AlgoPerf: Training Algorithms Leaderboard


MLCommons Logo

This repository hosts the official rolling leaderboard for the AlgoPerf: Training Algorithms benchmark by MLCommons. The benchmark measures neural network training speedups due to algorithmic improvements in training algorithms. The leaderboard tracks the aggregate performance of different algorithms on a variety of workloads and under two different tuning rulesets.

Note

If you want to submit to the AlgoPerf benchmark, please open a PR with your submission. The AlgoPerf working group will review your submission and potentially evaluate it on all workloads. For more details, see the How to Submit section.

Live Leaderboards

Leaderboard Version: 0.6
Last Updated: 2025-03-24 15:07 UTC
Using Benchmark Version: latest

Tip

The leaderboard of the first AlgoPerf competition, which contains more entries, can be found here.

External Tuning Ruleset Leaderboard

In the external tuning ruleset, submissions must provide a workload-agnostic hyperparameter search space and will receive 5 tuning trials per workload, sampled from this search space.
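
For illustration, the sketch below shows what a workload-agnostic search space with 5 sampled trials per workload could look like. The parameter names, ranges, and the sampling helper are assumptions made for this example and are not the benchmark's actual search-space format; see the Technical Documentation for the real interface.

import math
import random

# Hypothetical workload-agnostic search space (illustrative names and ranges).
SEARCH_SPACE = {
    "learning_rate":   {"min": 1e-4, "max": 1e-1, "scaling": "log"},
    "weight_decay":    {"min": 1e-3, "max": 1e-1, "scaling": "log"},
    "warmup_fraction": {"min": 0.02, "max": 0.10, "scaling": "linear"},
}

def sample_trial(space, rng):
    # Draw one hyperparameter setting, log-uniformly or uniformly per parameter.
    trial = {}
    for name, spec in space.items():
        if spec["scaling"] == "log":
            trial[name] = math.exp(rng.uniform(math.log(spec["min"]), math.log(spec["max"])))
        else:
            trial[name] = rng.uniform(spec["min"], spec["max"])
    return trial

# Under the external tuning ruleset, 5 such trials are drawn for each workload.
rng = random.Random(0)
trials = [sample_trial(SEARCH_SPACE, rng) for _ in range(5)]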

| Rank | Submission | Authors | Affiliation | Framework | Logs | Score |
| --- | --- | --- | --- | --- | --- | --- |
| 1. | Distributed Shampoo: based on the Distributed Shampoo algorithm of Anil et al. (2020), with an implementation tailored to leverage PyTorch performance optimizations; see Shi et al. (2023) for details. The submission uses a list of five hyperparameter settings. | Hao-Jun Shi, Tsung-Hsien Lee, Anna Cai, Shintaro Iwasaki, Wenyin Fu, Yuchen Hao, Mike Rabbat | Meta Platforms | PyTorch | 💾 | 0.6244 |
| 2. | Baseline: uses NadamW (Dozat, 2016; Loshchilov & Hutter, 2019) and a linear learning rate warmup followed by a cosine decay (Dahl et al., 2023). | | | JAX | 💾 | 0.4590 |

Self-Tuning Ruleset Leaderboard

In the self-tuning ruleset, submissions must be completely hyperparameter-free.

Note

The first self-tuning submissions are currently being scored.

How to Submit

To submit your algorithm for evaluation on the AlgoPerf leaderboard, please follow these steps:

  1. Implement your algorithm in the AlgoPerf API: Have a look at our Getting Started Guide and the Technical Documentation; a rough sketch of the expected submission functions is shown below this list.
  2. Create a Pull Request: Fork this repository, create a new branch, and add your submission code to a new folder within either submissions/external_tuning/ or submissions/self_tuning/. Open a pull request (PR) to the evaluation branch of this repository. Make sure to fill out the PR template asking for information such as submission name, authors, affiliations, etc.
  3. PR Review and Evaluation: The AlgoPerf working group will review your PR. Based on our available resources and the perceived potential of the method, your submission may be selected for a free evaluation and merged into the evaluation branch. The working group will run your submission on all workloads and push the results, as well as the updated leaderboard, to the main branch.
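
As a rough starting point for step 1, the sketch below shows the shape of a submission module. The function names (get_batch_size, init_optimizer_state, update_params, data_selection) follow the benchmark's submission interface, but the signatures here are abbreviated, the optimizer is only an example (plain AdamW), and workload.loss_fn stands in for the forward-pass and loss helpers that the real API provides; the authoritative interface is defined in the Technical Documentation.

import torch

# Hypothetical, simplified PyTorch submission skeleton. Signatures are
# abbreviated for illustration; consult the Technical Documentation for the
# exact arguments each function receives.

def get_batch_size(workload_name):
    # Batch size this submission wants to use for the given workload.
    return 128  # Illustrative value only.

def init_optimizer_state(workload, model_params, model_state, hyperparameters, rng):
    # Build the optimizer (plain AdamW here, purely as an example) and return
    # whatever state update_params will need later.
    optimizer = torch.optim.AdamW(
        model_params.parameters(),  # assumes model_params is an nn.Module
        lr=hyperparameters.learning_rate,
        weight_decay=hyperparameters.weight_decay,
    )
    return {"optimizer": optimizer}

def update_params(workload, current_param_container, model_state,
                  hyperparameters, batch, optimizer_state, global_step, rng):
    # One training step: compute the loss on the batch, backpropagate, and
    # apply the optimizer update. workload.loss_fn is a placeholder for the
    # workload-provided forward-pass/loss helpers of the real API.
    optimizer = optimizer_state["optimizer"]
    optimizer.zero_grad()
    loss = workload.loss_fn(current_param_container, batch)
    loss.backward()
    optimizer.step()
    return optimizer_state, current_param_container, model_state

def data_selection(workload, input_queue, optimizer_state,
                   current_param_container, model_state, global_step, rng):
    # Simplest possible data selection: take the next batch from the queue.
    return next(input_queue)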

Citation

If you use the AlgoPerf benchmark in your research, please consider citing our paper.

Dahl, Schneider, Nado, et al.
> Benchmarking Neural Network Training Algorithms
> arXiv 2306.07179

@Misc{Dahl2023AlgoPerf,
  title         = {{Benchmarking Neural Network Training Algorithms}},
  author        = {Dahl, George E. and Schneider, Frank and Nado, Zachary and Agarwal, Naman and Sastry, Chandramouli Shama and Hennig, Philipp and Medapati, Sourabh and Eschenhagen, Runa and Kasimbeg, Priya and Suo, Daniel and Bae, Juhan and Gilmer, Justin and Peirson, Abel L. and Khan, Bilal and Anil, Rohan and Rabbat, Mike and Krishnan, Shankar and Snider, Daniel and Amid, Ehsan and Chen, Kongtao and Maddison, Chris J. and Vasudev, Rakshith and Badura, Michal and Garg, Ankush and Mattson, Peter},
  year          = {2023},
  archiveprefix = {arXiv},
  eprint        = {2306.07179},
}

If you use the results from the first AlgoPerf competition, please consider citing the results paper, as well as the relevant submissions:

Kasimbeg, Schneider, Eschenhagen, et al.
> Accelerating neural network training: An analysis of the AlgoPerf competition
> ICLR 2025

@InProceedings{Kasimbeg2025AlgoPerfResults,
  title         = {Accelerating neural network training: An analysis of the {AlgoPerf} competition},
  author        = {Kasimbeg, Priya and Schneider, Frank and Eschenhagen, Runa and Bae, Juhan and Sastry, Chandramouli Shama and Saroufim, Mark and Boyuan, Feng and Wright, Less and Yang, Edward Z. and Nado, Zachary and Medapati, Sourabh and Hennig, Philipp and Rabbat, Michael and Dahl, George E.},
  booktitle     = {The Thirteenth International Conference on Learning Representations},
  year          = {2025},
  url           = {https://openreview.net/forum?id=CtM5xjRSfm},
}
