From 6561064dcfb3dfa638e3739506acfd34924e1cc5 Mon Sep 17 00:00:00 2001
From: City <125218114+city96@users.noreply.github.com>
Date: Wed, 23 Oct 2024 04:38:39 +0200
Subject: [PATCH] Add SD3.5 links

---
 README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README.md b/README.md
index a4e64fb..6915927 100644
--- a/README.md
+++ b/README.md
@@ -39,8 +39,11 @@
 Pre-quantized models:
 
 - [flux1-dev GGUF](https://huggingface.co/city96/FLUX.1-dev-gguf)
 - [flux1-schnell GGUF](https://huggingface.co/city96/FLUX.1-schnell-gguf)
+- [stable-diffusion-3.5-large GGUF](https://huggingface.co/city96/stable-diffusion-3.5-large-gguf)
+- [stable-diffusion-3.5-large-turbo GGUF](https://huggingface.co/city96/stable-diffusion-3.5-large-turbo-gguf)
 
 Initial support for quantizing T5 has also been added recently, these can be used using the various `*CLIPLoader (gguf)` nodes which can be used inplace of the regular ones. For the CLIP model, use whatever model you were using before for CLIP. The loader can handle both types of files - `gguf` and regular `safetensors`/`bin`.
 
 - [t5_v1.1-xxl GGUF](https://huggingface.co/city96/t5-v1_1-xxl-encoder-gguf)
+See the instructions in the [tools](https://github.com/city96/ComfyUI-GGUF/tree/main/tools) folder for how to create your own quants.