README.md: +5 −3
@@ -15,6 +15,7 @@
 ### News
+- Thanks to [rom1504](https://github.com/rom1504) it is now easy to [train a VQGAN on your own datasets](#training-on-custom-data).
 - Included a bugfix for the quantizer. For backward compatibility it is
   disabled by default (which corresponds to always training with `beta=1.0`).
   Use `legacy=False` in the quantizer config to enable it.
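A note on the `legacy` flag: the sketch below shows one place it might be set, assuming the model's `params` forward a `legacy` kwarg down to the quantizer. The `embed_dim`/`n_embed` values mirror the stock configs; the exact plumbing is an assumption and should be verified against `taming/models/vqgan.py`.

```yaml
# Sketch only: assumes VQModel accepts a `legacy` kwarg under params and
# passes it to its quantizer; verify against taming/models/vqgan.py.
model:
  target: taming.models.vqgan.VQModel
  params:
    embed_dim: 256
    n_embed: 1024
    legacy: false  # enable the bugfixed quantizer loss (default true keeps old behaviour)
    # ... ddconfig and lossconfig unchanged from the stock config
```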
@@ -186,9 +187,10 @@ Training on your own dataset can be beneficial to get better tokens and hence be
 Those are the steps to follow to make this work:
 1. install the repo with `conda env create -f environment.yaml`, `conda activate taming` and `pip install -e .`
 1. put your .jpg files in a folder `your_folder`
-2. create 2 text files a xx_train.txt and xx_test.txt that point to the files in your training and test set respectively (for example `find `pwd`/your_folder -name "*.jpg" > train.txt`)
-3. adapt configs/custom_vqgan.yaml to point to these 2 files
-4. run `python main.py --base configs/custom_vqgan.yaml -t True --gpus 0,1`
+2. create 2 text files, `xx_train.txt` and `xx_test.txt`, that point to the files in your training and test set respectively (for example `find $(pwd)/your_folder -name "*.jpg" > train.txt`)
+3. adapt `configs/custom_vqgan.yaml` to point to these 2 files (see the config sketch after this diff)
+4. run `python main.py --base configs/custom_vqgan.yaml -t True --gpus 0,1` to
+   train on two GPUs. Use `--gpus 0,` (with a trailing comma) to train on a single GPU.
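For step 3, here is a sketch of how the `data` section of `configs/custom_vqgan.yaml` could point at the two text files. The `taming.data.custom.CustomTrain`/`CustomTest` targets and their list-file keys follow the custom-data loader this change links to; `batch_size` and `size` are illustrative values, not prescriptions.

```yaml
data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 5        # illustrative; tune to your GPU memory
    num_workers: 8
    train:
      target: taming.data.custom.CustomTrain
      params:
        training_images_list_file: xx_train.txt  # file created in step 2
        size: 256
    validation:
      target: taming.data.custom.CustomTest
      params:
        test_images_list_file: xx_test.txt       # file created in step 2
        size: 256
```

With this in place, the step-4 command picks the file up via `--base configs/custom_vqgan.yaml`.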