Closed
Describe the bug
Loading the BF16 variant of SANA leads to downloading and loading non-BF16 files.
Reproduction
```python
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    torch_dtype=torch.bfloat16,
    variant="bf16",
)
```
A mixture of bf16 and non-bf16 files is downloaded and loaded, and the following warning is emitted:

```
Loaded bf16 filenames:
[transformer/diffusion_pytorch_model.bf16.safetensors, text_encoder/model.bf16-00001-of-00002.safetensors, vae/diffusion_pytorch_model.bf16.safetensors, text_encoder/model.bf16-00002-of-00002.safetensors]
Loaded non-bf16 filenames:
[transformer/diffusion_pytorch_model-00002-of-00002.safetensors, transformer/diffusion_pytorch_model-00001-of-00002.safetensors]
If this behavior is not expected, please check your folder structure.
Loading pipeline components...:   0%|          | 0/5 [00:00<?, ?it/s]
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
```
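For context, here is a minimal sketch (not diffusers' actual implementation) of the variant filtering I would expect: with `variant="bf16"`, only filenames carrying the `.bf16` suffix (including sharded `.bf16-XXXXX-of-XXXXX` files) should be selected, and the unsuffixed transformer shards above should be skipped. The regex pattern is my assumption about the naming convention, based on the filenames in the warning:

```python
import re

def matches_variant(filename: str, variant: str) -> bool:
    """Return True if the filename carries the given variant suffix,
    e.g. 'model.bf16.safetensors' or 'model.bf16-00001-of-00002.safetensors'."""
    pattern = rf"\.{re.escape(variant)}(-\d+-of-\d+)?\.safetensors$"
    return re.search(pattern, filename) is not None

# Filenames taken from the warning above.
files = [
    "transformer/diffusion_pytorch_model.bf16.safetensors",
    "transformer/diffusion_pytorch_model-00001-of-00002.safetensors",
    "transformer/diffusion_pytorch_model-00002-of-00002.safetensors",
    "text_encoder/model.bf16-00001-of-00002.safetensors",
]

# Only the .bf16 files match; the unsuffixed transformer shards do not,
# yet the pipeline downloads and loads them anyway.
bf16_files = [f for f in files if matches_variant(f, "bf16")]
```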
Logs
System Info
- 🤗 Diffusers version: 0.32.2
- Platform: Linux-6.1.97.1.fi-x86_64-with-glibc2.28
- Running on Google Colab?: No
- Python version: 3.10.13
- PyTorch version (GPU?): 2.4.1+cu121 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.3
- Transformers version: 4.50.0
- Accelerate version: 1.0.0
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Who can help?
No response