Commit 13e6230

custom dataset training
Parent: f21f6da

1 file changed: README.md (+5 -3)
```diff
@@ -15,6 +15,7 @@
 
 
 ### News
+- Thanks to [rom1504](https://github.com/rom1504) it is now easy to [train a VQGAN on your own datasets](#training-on-custom-data).
 - Included a bugfix for the quantizer. For backward compatibility it is
   disabled by default (which corresponds to always training with `beta=1.0`).
   Use `legacy=False` in the quantizer config to enable it.
@@ -186,9 +187,10 @@ Training on your own dataset can be beneficial to get better tokens and hence be
 Those are the steps to follow to make this work:
 1. install the repo with `conda env create -f environment.yaml`, `conda activate taming` and `pip install -e .`
 1. put your .jpg files in a folder `your_folder`
-2. create 2 text files a xx_train.txt and xx_test.txt that point to the files in your training and test set respectively (for example `find `pwd`/your_folder -name "*.jpg" > train.txt`)
-3. adapt configs/custom_vqgan.yaml to point to these 2 files
-4. run `python main.py --base configs/custom_vqgan.yaml -t True --gpus 0,1`
+2. create 2 text files a `xx_train.txt` and `xx_test.txt` that point to the files in your training and test set respectively (for example `find $(pwd)/your_folder -name "*.jpg" > train.txt`)
+3. adapt `configs/custom_vqgan.yaml` to point to these 2 files
+4. run `python main.py --base configs/custom_vqgan.yaml -t True --gpus 0,1` to
+train on two GPUs. Use `--gpus 0,` (with a trailing comma) to train on a single GPU.
 
 ## Data Preparation
 
```
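For reference, the steps added in this commit can be collected into a single shell session. This is a minimal sketch: `your_folder`, the output file names `train.txt`/`test.txt`, and the hold-out of 100 images for the test set are placeholder assumptions, and `configs/custom_vqgan.yaml` still has to be edited by hand to point at the two list files, as the README describes.

```sh
# Step 1: set up and install the repo.
conda env create -f environment.yaml
conda activate taming
pip install -e .

# Step 2: build absolute-path lists of your .jpg files. How you split train/test
# is up to you; holding out the last 100 files is just one illustrative choice.
find "$(pwd)/your_folder" -name "*.jpg" | sort > all_images.txt
tail -n 100 all_images.txt > test.txt                # held-out test images
grep -v -x -F -f test.txt all_images.txt > train.txt # remaining images for training

# Step 3: edit configs/custom_vqgan.yaml so it points at train.txt and test.txt.

# Step 4: launch training on two GPUs; use "--gpus 0," for a single GPU.
python main.py --base configs/custom_vqgan.yaml -t True --gpus 0,1
```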
