- We added a [colab notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb) which compares two VQGANs and OpenAI's [DALL-E](https://github.com/openai/DALL-E). See also [this section](#more-resources) and the reconstruction sketch after this list.
- We now include an overview of pretrained models in [Tab.1](#overview-of-pretrained-models).
- The streamlit demo now supports image completions.
- We now include a couple of examples from the D-RIN dataset so you can run the demo without preparing the dataset first.
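
For readers who want to try what the notebook demonstrates outside of Colab, here is a minimal sketch of reconstructing an image with a pretrained VQGAN. The config and checkpoint paths are placeholders for whichever model from the table below is downloaded, and the notebook's own loading helpers may differ in detail.

```python
import torch
from omegaconf import OmegaConf
from taming.models.vqgan import VQModel

# Placeholder paths: point these at the config/checkpoint of a downloaded model.
config = OmegaConf.load("logs/vqgan_imagenet_f16_1024/configs/model.yaml")
model = VQModel(**config.model.params)
state_dict = torch.load("logs/vqgan_imagenet_f16_1024/checkpoints/last.ckpt",
                        map_location="cpu")["state_dict"]
model.load_state_dict(state_dict, strict=False)
model.eval()

# x: an image tensor scaled to [-1, 1] with shape (1, 3, H, W), H and W divisible by 16.
x = torch.zeros(1, 3, 256, 256)
with torch.no_grad():
    quant, _, _ = model.encode(x)  # encode and quantize to discrete latent codes
    x_rec = model.decode(quant)    # decode the codes back to image space
```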
## Overview of pretrained models
The following table provides an overview of all models that are currently available.
FID scores were evaluated using [torch-fidelity](https://github.com/toshas/torch-fidelity) and without rejection sampling.
For reference, we also include a link to the recently released autoencoder of the [DALL-E](https://github.com/openai/DALL-E) model.
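
As a rough illustration of how such an evaluation can be run, here is a minimal sketch using torch-fidelity's Python API; the folder paths are placeholders, and the exact setup (number of images, resolution, preprocessing) used for the table is not reproduced here.

```python
import torch_fidelity

# Placeholder directories: one with reference images, one with samples/reconstructions.
metrics = torch_fidelity.calculate_metrics(
    input1="path/to/reference_images",
    input2="path/to/generated_images",
    cuda=True,   # set to False to evaluate on CPU
    fid=True,
)
print(metrics["frechet_inception_distance"])
```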