Commit 049bd91

readme
1 parent 04764b4 commit 049bd91

2 files changed: +12 -2 lines changed

README.md (+4)
@@ -15,6 +15,10 @@
 ### News
+- Included a bugfix for the quantizer. For backward compatibility it is
+  disabled by default (which corresponds to always training with `beta=1.0`).
+  Use `legacy=False` in the quantizer config to enable it.
+  Thanks [richcmwang](https://github.com/richcmwang) and [wcshin-git](https://github.com/wcshin-git)!
 - Our paper received an update: See https://arxiv.org/abs/2012.09841v3 and the corresponding changelog.
 - Added a pretrained, [1.4B transformer model](https://k00.fr/s511rwcv) trained for class-conditional ImageNet synthesis, which obtains state-of-the-art FID scores among autoregressive approaches and outperforms BigGAN.
 - Added pretrained, unconditional models on [FFHQ](https://k00.fr/yndvfu95) and [CelebA-HQ](https://k00.fr/2xkmielf).
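
For reference, a minimal sketch (not part of this commit's diff) of constructing the improved quantizer with the fix enabled; the codebook size, embedding dimension, and beta value are illustrative, not taken from any shipped config:

from taming.modules.vqvae.quantize import VectorQuantizer2

# Minimal sketch; n_e, e_dim and beta are illustrative values, not from this commit.
quantizer = VectorQuantizer2(
    n_e=1024,      # number of codebook entries (illustrative)
    e_dim=256,     # embedding dimension (illustrative)
    beta=0.25,     # commitment weight (illustrative)
    legacy=False,  # opt into the bugfix; defaults to True for backward compatibility
)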

taming/modules/vqvae/quantize.py (+8 -2)
@@ -18,6 +18,10 @@ class VectorQuantizer(nn.Module):
     _____________________________________________
     """

+    # NOTE: this class contains a bug regarding beta; see VectorQuantizer2 for
+    # a fix and use legacy=False to apply that fix. VectorQuantizer2 can be
+    # used wherever VectorQuantizer has been used before and is additionally
+    # more efficient.
     def __init__(self, n_e, e_dim, beta):
         super(VectorQuantizer, self).__init__()
         self.n_e = n_e
@@ -211,7 +215,9 @@ class VectorQuantizer2(nn.Module):
     Improved version over VectorQuantizer, can be used as a drop-in replacement. Mostly
     avoids costly matrix multiplications and allows for post-hoc remapping of indices.
     """
-    # TODO: check beta fix, maybe include 'legacy' version?
+    # NOTE: due to a bug the beta term was applied to the wrong term. for
+    # backwards compatibility we use the buggy version by default, but you can
+    # specify legacy=False to fix it.
     def __init__(self, n_e, e_dim, beta, remap=None, unknown_index="random",
                  sane_index_shape=False, legacy=True):
         super().__init__()
@@ -320,4 +326,4 @@ def get_codebook_entry(self, indices, shape):
         # reshape back to match original input shape
         z_q = z_q.permute(0, 3, 1, 2).contiguous()

-        return z_q
+        return z_q
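
To make the NOTE added to VectorQuantizer2 concrete, here is a minimal sketch of the two placements of beta in the vector-quantization loss as described above; the variable names z (encoder output) and z_q (quantized output) are illustrative, and the exact expressions in quantize.py may differ:

import torch

def vq_loss(z, z_q, beta, legacy=True):
    # Codebook term: pulls the selected codebook entries toward the encoder output.
    codebook = torch.mean((z_q - z.detach()) ** 2)
    # Commitment term: pulls the encoder output toward the selected entries.
    commitment = torch.mean((z_q.detach() - z) ** 2)
    if legacy:
        # Backward-compatible placement: beta ends up on the codebook term.
        return commitment + beta * codebook
    # Fixed placement (legacy=False): beta weights the commitment term, matching
    # the VQ-VAE objective ||sg[z_e] - e||^2 + beta * ||z_e - sg[e]||^2.
    return beta * commitment + codebook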
