
Commit f2a7ad0

fedebotu, cbhua, LTluttmann, and Leaveson
committed
[Feat] Release!
Co-authored-by: Chuanbo Hua <[email protected]>
Co-authored-by: Laurin Luttmann <[email protected]>
Co-authored-by: Jiwoo Son <[email protected]>
0 parents  commit f2a7ad0

File tree

106 files changed: +10621 −0 lines


.gitignore

+177
# data and log
.data/
lightning_logs/
*.npz
logs/
outputs/
/data/
/notebooks/data/

# cache
cache/

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
/env
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
.idea/

# VSCode debug launch file
.vscode/

.pre-commit-config.yaml

+25
fail_fast: true

repos:

  - repo: https://github.com/psf/black
    rev: 23.3.0
    hooks:
      - id: black
        args: [--config, pyproject.toml]
        types: [python]

  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: "v0.0.272"
    hooks:
      - id: ruff
        args: [--fix, --exit-non-zero-on-fix]

  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: check-toml
      - id: check-yaml
      - id: detect-private-key
      - id: end-of-file-fixer
      - id: trailing-whitespace

LICENSE

+21
The MIT License (MIT)

Copyright (c) 2024 AI4CO

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

README.md

+106
# PARCO

[![arXiv](https://img.shields.io/badge/arXiv-2409.03811-b31b1b.svg)](https://arxiv.org/abs/2409.03811) [![Slack](https://img.shields.io/badge/slack-chat-611f69.svg?logo=slack)](https://join.slack.com/t/rl4co/shared_invite/zt-1ytz2c1v4-0IkQ8NQH4TRXIX8PrRmDhQ)
[![License: MIT](https://img.shields.io/badge/License-MIT-red.svg)](https://opensource.org/licenses/MIT)

Code repository for "PARCO: Learning Parallel Autoregressive Policies for Efficient Multi-Agent Combinatorial Optimization"

<div align="center">
    <img src="assets/ar-vs-par.png" style="width: 100%; height: auto;">
    <i> Autoregressive (AR) and Parallel Autoregressive (PAR) decoding </i>
</div>

<br>

<div align="center">
    <img src="assets/parco-model.png" style="width: 100%; height: auto;">
    <i> PARCO Model </i>
</div>

## 🚀 Usage

### Installation

```bash
pip install -e .
```

Note: we recommend using a virtual environment, e.g. with Conda:

```bash
conda create -n parco
conda activate parco
```

### Data generation

You can generate all the data we use for training and testing with the `generate_data.py` script:

```bash
python generate_data.py
```

### Quickstart Notebooks

We provide examples for each problem that can be trained in under two minutes on consumer hardware. You can find them in the `examples/` folder:

- [1.quickstart-hcvrp.ipynb](examples/1.quickstart-hcvrp.ipynb): HCVRP (Heterogeneous Capacitated Vehicle Routing Problem)
- [2.quickstart-omdcpdp.ipynb](examples/2.quickstart-omdcpdp.ipynb): OMDCPDP (Open Multi-Depot Capacitated Pickup and Delivery Problem)
- [3.quickstart-ffsp.ipynb](examples/3.quickstart-ffsp.ipynb): FFSP (Flexible Flow Shop Scheduling Problem)

### Train your own model

You can train your own model using the `train.py` script. For example, to train a model for the HCVRP problem, run:

```bash
python train.py experiment=hcvrp
```

You can change the `experiment` parameter to `omdcpdp` or `ffsp` to train a model for the OMDCPDP or FFSP problem, respectively.

Note on legacy FFSP code: the initial version was not yet integrated in RL4CO, so we left it in the [`parco/tasks/ffsp_old`](parco/tasks/ffsp_old/README.md) folder; you can still use it from there.

### Testing

You may run the `test.py` script to evaluate a trained model, e.g. with:

```bash
python test.py --problem hcvrp --decode_type greedy --batch_size 128 --sample_size 1
```

## 🤩 Citation

If you find PARCO valuable for your research or applied projects, please cite:

```bibtex
@article{berto2024parco,
    title={{PARCO: Learning Parallel Autoregressive Policies for Efficient Multi-Agent Combinatorial Optimization}},
    author={Federico Berto and Chuanbo Hua and Laurin Luttmann and Jiwoo Son and Junyoung Park and Kyuree Ahn and Changhyun Kwon and Lin Xie and Jinkyoo Park},
    year={2024},
    journal={arXiv preprint arXiv:2409.03811},
    note={\url{https://github.com/ai4co/parco}}
}
```

We would also be happy if you cite the RL4CO framework that we used to create PARCO:

```bibtex
@article{berto2024rl4co,
    title={{RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark}},
    author={Federico Berto and Chuanbo Hua and Junyoung Park and Laurin Luttmann and Yining Ma and Fanchen Bu and Jiarui Wang and Haoran Ye and Minsu Kim and Sanghyeok Choi and Nayeli Gast Zepeda and Andr\'e Hottung and Jianan Zhou and Jieyi Bi and Yu Hu and Fei Liu and Hyeonah Kim and Jiwoo Son and Haeyeon Kim and Davide Angioni and Wouter Kool and Zhiguang Cao and Jie Zhang and Kijung Shin and Cathy Wu and Sungsoo Ahn and Guojie Song and Changhyun Kwon and Lin Xie and Jinkyoo Park},
    year={2024},
    journal={arXiv preprint arXiv:2306.17100},
    note={\url{https://github.com/ai4co/rl4co}}
}
```

---

<div align="center">
    <a href="https://github.com/ai4co">
        <img src="https://raw.githubusercontent.com/ai4co/assets/main/svg/ai4co_animated_full.svg" alt="AI4CO Logo" style="width: 30%; height: auto;">
    </a>
</div>

assets/ar-vs-par.png

175 KB

assets/parco-model.png

119 KB

checkpoints/hcvrp/parco.ckpt

7.41 MB
Binary file not shown.

checkpoints/omdcpdp/parco.ckpt

3.79 MB
Binary file not shown.

configs/__init__.py

+1
# this file is needed here to include configs when building project as a package

configs/callbacks/default.yaml

+19
defaults:
  - model_checkpoint.yaml
  - model_summary.yaml
  - rich_progress_bar.yaml
  - speed_monitor.yaml
  - learning_rate_monitor.yaml
  - _self_

model_checkpoint:
  dirpath: ${paths.output_dir}/checkpoints
  filename: "epoch_{epoch:03d}"
  monitor: "val/reward"
  mode: "max"
  save_last: True
  auto_insert_metric_name: False
  save_top_k: 1 # set to -1 to save all checkpoints

model_summary:
  max_depth: 5 # change to -1 to show all; 5 strikes a good balance between readability and completeness
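The `save_top_k` / `monitor` / `mode` interaction in the checkpoint config above can be sketched in plain Python. This is a simplified illustration of the retention logic, not Lightning's actual `ModelCheckpoint` implementation:

```python
# Minimal sketch of top-k checkpoint retention: keep the k best
# checkpoints by a monitored metric; mode="max" keeps the highest
# values (here, "val/reward"). Illustrative only.

def update_top_k(saved, epoch, metric, k=1, mode="max"):
    """saved: list of (metric, filename) pairs currently kept on disk."""
    name = f"epoch_{epoch:03d}"  # matches the `filename` pattern above
    saved = saved + [(metric, name)]
    # Sort best-first: descending for "max", ascending for "min".
    saved.sort(key=lambda t: t[0], reverse=(mode == "max"))
    return saved[:k]  # checkpoints beyond the top k would be deleted

saved = []
for epoch, reward in enumerate([0.2, 0.5, 0.3]):
    saved = update_top_k(saved, epoch, reward, k=1, mode="max")
print(saved)  # [(0.5, 'epoch_001')]
```

With `save_top_k: -1`, Lightning instead keeps every checkpoint; the sketch's truncation step is what that setting disables.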

configs/callbacks/early_stopping.yaml

+17
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,17 @@
1+
# https://pytorch-lightning.readthedocs.io/en/latest/api/lightning.callbacks.EarlyStopping.html
2+
3+
# Monitor a metric and stop training when it stops improving.
4+
# Look at the above link for more detailed information.
5+
early_stopping:
6+
_target_: lightning.pytorch.callbacks.EarlyStopping
7+
monitor: ??? # quantity to be monitored, must be specified !!!
8+
min_delta: 0. # minimum change in the monitored quantity to qualify as an improvement
9+
patience: 3 # number of checks with no improvement after which training will be stopped
10+
verbose: False # verbosity mode
11+
mode: "min" # "max" means higher metric value is better, can be also "min"
12+
strict: True # whether to crash the training if monitor is not found in the validation metrics
13+
check_finite: True # when set True, stops training when the monitor becomes NaN or infinite
14+
stopping_threshold: null # stop training immediately once the monitored quantity reaches this threshold
15+
divergence_threshold: null # stop training as soon as the monitored quantity becomes worse than this threshold
16+
check_on_train_epoch_end: null # whether to run early stopping at the end of the training epoch
17+
# log_rank_zero_only: False # this keyword argument isn't available in stable version
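The `patience` / `min_delta` mechanism configured above can be sketched in plain Python. This is a simplified illustration of the stopping rule, not Lightning's `EarlyStopping` implementation:

```python
# Simplified early-stopping logic: with mode="min", stop after
# `patience` consecutive checks without an improvement of at least
# `min_delta` over the best value seen so far. Illustrative only.

def early_stop_index(metrics, patience=3, min_delta=0.0):
    """Return the check index at which training stops, or None if never."""
    best = float("inf")
    bad_checks = 0
    for i, value in enumerate(metrics):
        if value < best - min_delta:  # improvement (mode="min")
            best = value
            bad_checks = 0
        else:
            bad_checks += 1
            if bad_checks >= patience:
                return i  # patience exhausted: stop here
    return None

print(early_stop_index([1.0, 0.8, 0.9, 0.85, 0.81]))  # -> 4
```

With `mode: "max"` the comparison flips to `value > best + min_delta`; the `stopping_threshold` and `divergence_threshold` options add separate absolute cutoffs on top of this patience logic.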
configs/callbacks/learning_rate_monitor.yaml

+3

learning_rate_monitor:
  _target_: lightning.pytorch.callbacks.LearningRateMonitor
  logging_interval: epoch
