Is your feature request related to a problem? Please describe.
GGUF quantized weights are already available; please add support for loading them in the pipeline:
https://huggingface.co/calcuis/lumina-gguf/tree/main
Describe the solution you'd like.
```python
import torch
from diffusers import (
    GGUFQuantizationConfig,  # missing from the original snippet
    Lumina2Text2ImgPipeline,
    Lumina2Transformer2DModel,
)

bfl_repo = "Alpha-VLLM/Lumina-Image-2.0"
dtype = torch.bfloat16

# Load the GGUF-quantized transformer from a single file
transformer_path = "https://huggingface.co/calcuis/lumina-gguf/blob/main/lumina2-q8_0.gguf"
transformer = Lumina2Transformer2DModel.from_single_file(
    transformer_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=dtype,
    config=bfl_repo,
    subfolder="transformer",
)

pipe = Lumina2Text2ImgPipeline.from_pretrained(
    bfl_repo,
    transformer=transformer,
    torch_dtype=dtype,
)
pipe.enable_model_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

inference_params = {
    "prompt": (
        "Portrait of a young woman in a Victorian-era outfit with brass goggles "
        "and leather straps. Background shows an industrial revolution cityscape "
        "with smoky skies and tall, metal structures"
    ),
    "height": 1024,
    "width": 576,
    "guidance_scale": 4.0,
    "num_inference_steps": 30,
    "generator": torch.Generator(device="cpu").manual_seed(0),
}

image = pipe(**inference_params).images[0]
output_path = "lumina2_gguf.png"  # placeholder; was undefined in the original snippet
image.save(output_path)
```
Describe alternatives you've considered.
BnB int4/int8 works; with GGUF we may achieve further memory reduction.
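For reference, the working BnB path mentioned above looks roughly like this. This is a sketch, not a tested script: it assumes diffusers' `BitsAndBytesConfig` and the standard `from_pretrained` quantization flow, and it downloads the full Alpha-VLLM checkpoint.

```python
import torch
from diffusers import (
    BitsAndBytesConfig,
    Lumina2Text2ImgPipeline,
    Lumina2Transformer2DModel,
)

repo = "Alpha-VLLM/Lumina-Image-2.0"

# 4-bit bitsandbytes quantization of the transformer (int8 would use load_in_8bit=True)
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = Lumina2Transformer2DModel.from_pretrained(
    repo,
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = Lumina2Text2ImgPipeline.from_pretrained(
    repo,
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
```

Unlike the GGUF path, this requires the full-precision shards to be downloaded before quantizing on load, which is part of the motivation for single-file GGUF support.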
Additional context.
```
(venv) C:\aiOWN\diffuser_webui>python lumina2_gguf.py
Traceback (most recent call last):
  File "C:\aiOWN\diffuser_webui\lumina2_gguf.py", line 6, in <module>
    transformer = Lumina2Transformer2DModel.from_single_file(
AttributeError: type object 'Lumina2Transformer2DModel' has no attribute 'from_single_file'
```
@zhuole1025
from_single_file support will also help with loading the bf16/fp16 single-file checkpoints:
https://huggingface.co/Comfy-Org/Lumina_Image_2.0_Repackaged/tree/main/split_files
Closing since #10781 was merged.