Commit 6021b26

readme links
1 parent e0d82af commit 6021b26

1 file changed: +3 -3 lines changed

README.md (+3 -3)
@@ -13,7 +13,7 @@
 [arXiv](https://arxiv.org/abs/2012.09841) | [BibTeX](#bibtex) | [Project Page](https://compvis.github.io/taming-transformers/)

 ### News
-- We added a [colab notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb) which compares two VQGANs and OpenAI's [DALL-E](). See also [this section](#more-resources).
+- We added a [colab notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb) which compares two VQGANs and OpenAI's [DALL-E](https://github.com/openai/DALL-E). See also [this section](#more-resources).
 - We now include an overview of pretrained models in [Tab.1](#overview-of-pretrained-models)
 - The streamlit demo now supports image completions.
 - We now include a couple of examples from the D-RIN dataset so you can run the
@@ -31,7 +31,7 @@ conda activate taming
 ## Overview of pretrained models
 The following table provides an overview of all models that are currently available.
 FID scores were evaluated using [torch-fidelity](https://github.com/toshas/torch-fidelity) and without rejection sampling.
-For reference, we also include a link to the recently released autoencoder of the [DALL-E]() model.
+For reference, we also include a link to the recently released autoencoder of the [DALL-E](https://github.com/openai/DALL-E) model.
 See the corresponding [colab
 notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb)
 for a comparison and discussion of reconstruction capabilities.
@@ -51,7 +51,7 @@ for a comparison and discussion of reconstruction capabilities.
 | VQGAN ImageNet (f=16), 1024| 8.0 | [vqgan_imagenet_f16_1024](https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/) | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs evaluated against the validation split of ImageNet on 256x256 images.
 | VQGAN ImageNet (f=16), 16384| 4.9 |[vqgan_imagenet_f16_16384](https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/) | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs evaluated against the validation split of ImageNet on 256x256 images.
 | | | | || |
-| DALL-E VQVA (f=8), 8192, GumbelQuantization| 34.3 | https://github.com/openai/DALL-E | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs evaluated against the validation split of ImageNet on 256x256 images.
+| DALL-E VQVAE (f=8), 8192, GumbelQuantization| 34.3 | https://github.com/openai/DALL-E | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs evaluated against the validation split of ImageNet on 256x256 images.


 ## Running pretrained models
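
The context lines above note that the FID scores in the model table were computed with [torch-fidelity](https://github.com/toshas/torch-fidelity) and without rejection sampling. As a minimal sketch of that kind of evaluation (the directory names below are placeholder assumptions, not paths from this repository), a reconstruction FID between a folder of validation images and a folder of reconstructions can be computed like this:

```python
# Minimal sketch: reconstruction FID with torch-fidelity
# (pip install torch-fidelity). Directory paths are placeholders.
from torch_fidelity import calculate_metrics

metrics = calculate_metrics(
    input1="imagenet_val_256",       # placeholder: 256x256 validation images
    input2="vqgan_reconstructions",  # placeholder: reconstructions of the same images
    cuda=True,                       # set False to run on CPU
    fid=True,                        # request the Frechet Inception Distance
)
print(metrics["frechet_inception_distance"])
```

torch-fidelity also ships a `fidelity` command-line tool that exposes the same metrics without writing any Python.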
