docs/GETTING_STARTED.md
- [Docker Tips](#docker-tips)
- [Score your Submission](#score-your-submission)
- [Running workloads](#running-workloads)
- [Package your Submission code](#package-your-submission-code)
- [Package Logs for Self-Reporting Submissions](#package-logs-for-self-reporting-submissions)
- [Submit your Submission](#submit-your-submission)
## Set Up and Installation

The specs on the benchmarking machines are:

- 8x V100 16 GB GPUs
- 240 GB of RAM
- 2 TB of storage (for datasets).

3. Install the `algoperf` package and dependencies either in a [Python virtual environment](#python-virtual-environment) or use a [Docker](#docker) (recommended) or [Singularity/Apptainer container](#using-singularityapptainer-instead-of-docker).
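For the virtual-environment route, the setup can be sketched roughly as follows (the environment name is a placeholder, and the editable-install command is an assumption; check the repository README for the exact install command and extras):

```shell
# Create and activate an isolated environment (name is a placeholder).
python3 -m venv algoperf_env
. algoperf_env/bin/activate

# From the repository root, an editable install would then look something like:
# pip install -e .

# Confirm the interpreter now comes from the virtual environment.
python -c "import sys; print(sys.prefix)"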
We provide the scores and performance profiles for the [paper baseline algorithms](/reference_algorithms/paper_baselines/) in the "Baseline Results" section of [Benchmarking Neural Network Training Algorithms](https://arxiv.org/abs/2306.07179).
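To illustrate what a performance profile measures, here is a minimal sketch (not the benchmark's official scoring code) of how a Dolan-Moré-style performance profile can be computed from per-workload training runtimes. All algorithm names and numbers below are made up:

```python
# Illustrative sketch only -- not the benchmark's official scoring code.
# A performance profile reports, for each algorithm, the fraction of
# workloads it solves within a factor tau of the fastest algorithm.

def performance_profile(runtimes, taus):
    """runtimes: {algorithm: [runtime per workload, None if target not reached]}.
    Returns {algorithm: [fraction of workloads within tau * best, one per tau]}."""
    algos = list(runtimes)
    n_workloads = len(next(iter(runtimes.values())))
    # Best (smallest) runtime on each workload across all algorithms.
    best = [
        min(runtimes[a][w] for a in algos if runtimes[a][w] is not None)
        for w in range(n_workloads)
    ]
    profiles = {}
    for a in algos:
        # Runtime ratio vs. the best algorithm; failures count as infinity.
        ratios = [
            runtimes[a][w] / best[w] if runtimes[a][w] is not None else float("inf")
            for w in range(n_workloads)
        ]
        profiles[a] = [sum(r <= tau for r in ratios) / n_workloads for tau in taus]
    return profiles

# Hypothetical runtimes on three workloads; None marks a run that never hit the target.
runtimes = {"algo_a": [100.0, 240.0, None], "algo_b": [90.0, 300.0, 50.0]}
profiles = performance_profile(runtimes, taus=[1.0, 1.5, 2.0])
```

Larger profile values are better: an algorithm whose curve reaches 1.0 at small tau is close to the fastest on every workload.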
## Package your Submission code

To package your submission modules, please make sure your submission folder is structured as follows:

```bash
submission_folder/
├── external_tuning
│   ├── algorithm_name
│   │   ├── helper_module.py
│   │   ├── requirements.txt
│   │   ├── submission.py
│   │   └── tuning_search_space.json
│   └── other_algorithm_name
│       ├── requirements.txt
│       ├── submission.py
│       └── tuning_search_space.json
└── self_tuning
    └── algorithm_name
        ├── requirements.txt
        └── submission.py
```
Specifically, we require that:

1. There exist subdirectories in the submission folder named after the ruleset: `external_tuning` or `self_tuning`.
2. The ruleset subdirectories contain directories named according to some identifier of the algorithm.
3. Each algorithm subdirectory contains a `submission.py` module. Additional helper modules are allowed if you prefer to organize your code into multiple files. If there are additional Python packages that have to be installed for the algorithm, also include a `requirements.txt` with package names and versions in the algorithm subdirectory.
4. For `external_tuning` algorithms, the algorithm subdirectory should contain a `tuning_search_space.json`.

To check that your submission folder meets the above requirements, you can run the `submissions/repo_checker.py` script.
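The four requirements above can be sketched as a small validator. This is an illustrative sketch only, not the official `submissions/repo_checker.py` script:

```python
# Illustrative sketch only -- NOT the official submissions/repo_checker.py.
# A minimal check that a submission folder follows the layout described above.
from pathlib import Path

RULESETS = ("external_tuning", "self_tuning")

def check_submission_folder(root):
    """Return a list of problems found; an empty list means the layout looks OK."""
    root = Path(root)
    errors = []
    rulesets = [d for d in root.iterdir() if d.is_dir() and d.name in RULESETS]
    if not rulesets:
        errors.append("no external_tuning/ or self_tuning/ subdirectory")
    for ruleset in rulesets:
        for algo in sorted(d for d in ruleset.iterdir() if d.is_dir()):
            # Every algorithm needs a submission.py entry point.
            if not (algo / "submission.py").is_file():
                errors.append(f"{algo}: missing submission.py")
            # Externally tuned algorithms also need a search space definition.
            if ruleset.name == "external_tuning" and not (
                algo / "tuning_search_space.json"
            ).is_file():
                errors.append(f"{algo}: missing tuning_search_space.json")
    return errors
```

For example, `check_submission_folder("submission_folder")` returns an empty list for a folder matching the tree above. The official checker may enforce additional rules, so treat this only as a first sanity pass.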
## Package Logs for Self-Reporting Submissions

To prepare your submission for self-reporting, run:

The destination directory will contain the logs packed in studies and trials required for self-reporting.
## Submit your Submission

To submit your submission, please create a PR on the [submission repository](https://github.com/mlcommons/submissions_algorithms). You can find more details in the submission repository's [How to Submit](https://github.com/mlcommons/submissions_algorithms?tab=readme-ov-file#how-to-submit) section. The working group will review your PR and select the most promising submissions for scoring.