Open
Description
Describe the bug
It is pretty weird that even after I set the LoRA rank to only 2, training dreambooth_flux with LoRA still runs out of memory (OOM). My GPUs are three L40S cards (44-48 GB each), and the training data is only a single image.
If this still OOMs, I wonder why the LoRA version exists at all: the next GPU memory tier is 80 GB, which could train the model without LoRA.
Reproduction
NA
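No exact command was given, so here is a minimal sketch of the kind of launch that typically triggers this setup. The script name, paths, and flags below are assumptions based on the standard diffusers DreamBooth Flux LoRA example, not the reporter's actual invocation:

```shell
# Hypothetical reproduction sketch (not the reporter's exact command).
# Assumes the diffusers example script train_dreambooth_lora_flux.py
# and a directory containing the single training image.
accelerate launch train_dreambooth_lora_flux.py \
  --pretrained_model_name_or_path="black-forest-labs/FLUX.1-dev" \
  --instance_data_dir="./instance_images" \
  --instance_prompt="a photo of sks dog" \
  --output_dir="./flux-lora-out" \
  --rank=2 \
  --mixed_precision="bf16" \
  --gradient_checkpointing \
  --train_batch_size=1
```

Even with rank 2, the frozen base transformer and text encoders must still fit in memory, so flags like gradient checkpointing and bf16 alone may not be enough on a 48 GB card.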
Logs
No response
System Info
3× NVIDIA L40S (44-48 GB each)
Who can help?
No response