How to convert FLUX.1-Depth/Canny/Fill-dev.safetensors to Q8? #189
Comments
@city96 Could you help me with this? Thanks!
Search YouTube; there is a video with a Colab notebook that does this. But I do not know how to do it myself.
Can you share the video URL? I don't know what keywords to search for.
There are conversions available on Hugging Face. Here, for example, is a quantized FLUX.1-Fill-dev: FLUX.1-Fill-dev.gguf. It is supported by the ComfyUI-GGUF nodes.

The problem I have is that FLUX.1 Dev LoRAs are not working with FLUX.1-Fill-dev.gguf. This is the error I get:

Are LoRAs even supposed to work with FLUX.1-Fill-dev?
Hi, Black Forest Labs just released their strong ControlNet-style models, but they are too big: the same size as FLUX.1 Dev fp16 (24 GB).
Could you please explain how to convert FLUX.1-Depth/Canny/Fill-dev.safetensors to Q8, and can you support it? Thanks!
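For context on what "convert to Q8" means here: GGUF's Q8_0 format stores each block of 32 weights as one fp16 scale plus 32 signed 8-bit integers, which is why the file shrinks to roughly a third of fp16 size with little quality loss. Below is a minimal numpy sketch of that round-trip, for illustration only; it is not the actual conversion tool (the ComfyUI-GGUF repository ships its own conversion scripts for producing real `.gguf` files), and the function names here are made up for the example.

```python
import numpy as np

QK8_0 = 32  # block size used by GGML's Q8_0 format

def quantize_q8_0(x):
    """Quantize a 1-D float array (length a multiple of 32) to Q8_0:
    one fp16 scale per block plus 32 signed 8-bit values."""
    blocks = x.reshape(-1, QK8_0).astype(np.float32)
    amax = np.abs(blocks).max(axis=1, keepdims=True)
    d = (amax / 127.0).astype(np.float16)        # per-block fp16 scale
    scale = d.astype(np.float32)
    scale[scale == 0] = 1.0                      # avoid div-by-zero on all-zero blocks
    q = np.clip(np.round(blocks / scale), -127, 127).astype(np.int8)
    return d, q

def dequantize_q8_0(d, q):
    """Reconstruct float weights from scales and int8 values."""
    return (q.astype(np.float32) * d.astype(np.float32)).reshape(-1)

rng = np.random.default_rng(0)
x = rng.standard_normal(64).astype(np.float32)
d, q = quantize_q8_0(x)
y = dequantize_q8_0(d, q)
err = np.abs(x - y).max()  # small per-block rounding error
```

The quantization error per weight is bounded by about half the block scale, which for typical weight magnitudes stays well under 1% of the value range.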