# macOS Sequoia 15.1.1 with MPS and PyTorch (BFloat16 Unsupported) #175
Hmmm, the actual weights for the model you linked are all in different quantized formats (the old flux ones use FP16 for the mixed weights because they're from before I added BF16 support). I believe FP16 might cause NaN issues with flux, which is why the forward pass is cast to bf16 by default in comfy (and it sort of looks like the fp16 flag isn't respected for flux for some reason). Could you see if
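For anyone debugging this on a Mac, a quick way to check whether a given PyTorch build supports BFloat16 on the MPS backend is to try materializing a bf16 tensor on the device. This is a minimal probe of my own, not something ComfyUI itself runs:

```python
import torch

# Minimal probe: try to allocate a bf16 tensor on the Metal device.
# PyTorch/macOS combinations without BFloat16 support raise an error here.
if torch.backends.mps.is_available():
    try:
        torch.zeros(1, dtype=torch.bfloat16, device="mps")
        print("BFloat16 on MPS: supported")
    except (RuntimeError, TypeError) as err:
        print(f"BFloat16 on MPS: unsupported ({err})")
else:
    print("MPS backend not available")
```

---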
Me too.

---
```
% python main.py --force-fp32
ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-12-11 21:30:51.125370
Prestartup times for custom nodes:
Total VRAM 32768 MB, total RAM 32768 MB
Loading: ComfyUI-Impact-Pack (V7.14)
Loading: ComfyUI-Impact-Pack (Subpack: V0.8)
[Impact Pack] Wildcards loading done.
Loading: ComfyUI-Manager (V2.55)
ComfyUI Version: v0.3.6-17-g8af9a91 | Released on '2024-12-06'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
```

---
If I use PYTORCH_ENABLE_MPS_FALLBACK=1 then it works on the CPU, but it is very slow:

```
% PYTORCH_ENABLE_MPS_FALLBACK=1 python main.py --force-fp32
Requested to load FluxClipModel_
```

---
That error isn't about BFloat16, though; it's about rshift, which is a completely different issue. Based on this comment, the rshift thing should work on the latest PyTorch: #27 (comment). You're using PyTorch 2.2.2, which is probably why it's failing.
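To make the version dependence concrete, here's a minimal repro sketch of the op in question (my own illustration, not code from ComfyUI-GGUF itself):

```python
import torch

# Sketch of the failing op, assuming an MPS device is present: integer
# right-shift on MPS tensors. Older PyTorch releases (around 2.2.x) don't
# implement this op for the MPS backend, so it errors out unless
# PYTORCH_ENABLE_MPS_FALLBACK=1 routes it through the CPU; recent releases
# run it natively on the GPU.
x = torch.tensor([8, 16, 32], dtype=torch.int32, device="mps")
print(x >> 2)  # tensor([2, 4, 8], ...) on builds that support it
```

---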
Please check this issue:
comfyanonymous/ComfyUI#5829

I think the issue is that the model weights are in bfloat16, and even after converting them, it still detects the weights as bfloat16.
https://github.com/city96/ComfyUI-GGUF/tree/main/tools

I tried all the models and I see the bfloat16 error with every one:
https://huggingface.co/city96/FLUX.1-schnell-gguf/tree/main

Is it possible to convert them for an Intel Mac with an AMD GPU?
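If the goal is just to get an unquantized checkpoint out of bfloat16, one workaround worth trying is recasting the weights to fp16 before loading. This is a hedged sketch with hypothetical file names, separate from the repo's own conversion tools, and per the comment above it may not help if something downstream still reports the dtype as bf16:

```python
import torch
from safetensors.torch import load_file, save_file

# Hypothetical file names: recast every bf16 tensor in a checkpoint to fp16
# so backends without BFloat16 support (e.g. older MPS builds) can load it.
state_dict = load_file("flux1-schnell.safetensors")
state_dict = {
    k: v.to(torch.float16) if v.dtype == torch.bfloat16 else v
    for k, v in state_dict.items()
}
save_file(state_dict, "flux1-schnell-fp16.safetensors")
```

---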
This is not working:

```
% PYTORCH_ENABLE_MPS_FALLBACK=1 PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 python main.py --use-split-cross-attention --force-fp16
```
- Processor: Intel(R) Xeon(R) W-2140B CPU @ 3.20 GHz, 8 cores (Hyper-Threading enabled)
- Memory: 32 GB RAM (34359738368 bytes)
- GPU: Radeon Pro Vega 56, 8 GB VRAM, PCIe x16, Metal 3 (supports Apple's graphics framework)