### News
- Included a bugfix for the quantizer. For backward compatibility it is disabled by default (which corresponds to always training with `beta=1.0`). Use `legacy=False` in the quantizer config to enable it. Thanks [richcmwang](https://github.com/richcmwang) and [wcshin-git](https://github.com/wcshin-git)!
- Our paper received an update: See https://arxiv.org/abs/2012.09841v3 and the corresponding changelog.
- Added a pretrained, [1.4B transformer model](https://k00.fr/s511rwcv) trained for class-conditional ImageNet synthesis, which obtains state-of-the-art FID scores among autoregressive approaches and outperforms BigGAN.
- Added pretrained, unconditional models on [FFHQ](https://k00.fr/yndvfu95) and [CelebA-HQ](https://k00.fr/2xkmielf).
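To make the `legacy` / `beta` note above concrete, here is a minimal, purely illustrative sketch of the distinction. It is not the repository's actual quantizer code (the real implementation uses stop-gradients on PyTorch tensors, and the function and argument names here are hypothetical); it only shows that with `legacy=True` the loss behaves as if `beta=1.0`, while `legacy=False` actually applies the configured `beta` to the commitment term:

```python
import numpy as np

def vq_loss_sketch(z, z_q, beta=0.25, legacy=True):
    """Illustrative VQ training loss (NOT the repo's exact code).

    z   : encoder output
    z_q : quantized codebook vector chosen for z

    In the real implementation, stop-gradients decide which term updates
    the codebook and which updates the encoder; numerically both terms
    are squared distances, which is all this sketch needs to show the
    effect of the `legacy` flag on the weighting.
    """
    codebook_term = np.mean((z_q - z) ** 2)  # pulls codebook toward encoder
    commit_term = np.mean((z - z_q) ** 2)    # commitment term, weighted by beta
    if legacy:
        # Backward-compatible default: beta is effectively ignored,
        # i.e. training always behaves as if beta == 1.0.
        return codebook_term + commit_term
    # Bugfixed path (legacy=False): the configured beta is applied.
    return codebook_term + beta * commit_term
```

With `beta=0.25`, the two paths give different loss values for the same inputs, which is why the fix is opt-in via the quantizer config rather than enabled by default.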