
Please add support for GGUF in Lumina2 pipeline #10749

Closed
nitinmukesh opened this issue Feb 8, 2025 · 2 comments

Comments


nitinmukesh commented Feb 8, 2025

Is your feature request related to a problem? Please describe.
GGUF weights for Lumina Image 2.0 are already available; please add support for loading them in the pipeline:
https://huggingface.co/calcuis/lumina-gguf/tree/main

Describe the solution you'd like.

import torch
from diffusers import GGUFQuantizationConfig, Lumina2Text2ImgPipeline, Lumina2Transformer2DModel

bfl_repo = "Alpha-VLLM/Lumina-Image-2.0"
dtype = torch.bfloat16
transformer_path = "https://huggingface.co/calcuis/lumina-gguf/blob/main/lumina2-q8_0.gguf"
transformer = Lumina2Transformer2DModel.from_single_file(
	transformer_path,
	quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
	torch_dtype=dtype,
	config=bfl_repo,
	subfolder="transformer",
)

pipe = Lumina2Text2ImgPipeline.from_pretrained(
	bfl_repo,
	transformer=transformer,
	torch_dtype=dtype,
)
pipe.enable_model_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
inference_params = {
	"prompt": "Portrait of a young woman in a Victorian-era outfit with brass goggles and leather straps. Background shows an industrial revolution cityscape with smoky skies and tall, metal structures",
	"height": 1024,
	"width": 576,
	"guidance_scale": 4.0,
	"num_inference_steps": 30,
	"generator": torch.Generator(device="cpu").manual_seed(0),
}
image = pipe(**inference_params).images[0]
output_path = "lumina2_gguf.png"
image.save(output_path)

Describe alternatives you've considered.
BnB int4/int8 works; with GGUF we may achieve further memory reduction.

Additional context.
(venv) C:\aiOWN\diffuser_webui>python lumina2_gguf.py
Traceback (most recent call last):
File "C:\aiOWN\diffuser_webui\lumina2_gguf.py", line 6, in <module>
transformer = Lumina2Transformer2DModel.from_single_file(
AttributeError: type object 'Lumina2Transformer2DModel' has no attribute 'from_single_file'

@zhuole1025

nitinmukesh (Author) commented:

from_single_file support will also help use bf16/fp16
https://huggingface.co/Comfy-Org/Lumina_Image_2.0_Repackaged/tree/main/split_files

DN6 (Collaborator) commented Feb 12, 2025

Closing since #10781 was merged.

DN6 closed this as completed Feb 12, 2025