diff --git a/README.md b/README.md
index a4e64fb..6915927 100644
--- a/README.md
+++ b/README.md
@@ -39,8 +39,11 @@ Pre-quantized models:
 
 - [flux1-dev GGUF](https://huggingface.co/city96/FLUX.1-dev-gguf)
 - [flux1-schnell GGUF](https://huggingface.co/city96/FLUX.1-schnell-gguf)
+- [stable-diffusion-3.5-large GGUF](https://huggingface.co/city96/stable-diffusion-3.5-large-gguf)
+- [stable-diffusion-3.5-large-turbo GGUF](https://huggingface.co/city96/stable-diffusion-3.5-large-turbo-gguf)
 
 Initial support for quantizing T5 has also been added recently; these quants can be loaded with the various `*CLIPLoader (gguf)` nodes in place of the regular ones. For the CLIP model, use whatever model you were using before. The loader can handle both file types - `gguf` and regular `safetensors`/`bin`.
 
 - [t5_v1.1-xxl GGUF](https://huggingface.co/city96/t5-v1_1-xxl-encoder-gguf)
 
+See the instructions in the [tools](https://github.com/city96/ComfyUI-GGUF/tree/main/tools) folder for how to create your own quants.