When generating a 768x768 image with an SD 1.5 model through the WebUI and 2 ControlNet models active, my VRAM usage sits at around 6 GB, then goes up to about 8 GB when using the hires fix to continue generating at 1420x1420.
When I do the same setup but with PWW enabled, my VRAM usage spikes to 21 GB during the 768x768 generation, and then it crashes with an OOM error when trying to use the hires fix at 1420x1420, reporting that it tried to allocate 29 GB of VRAM (!!!), which is more than my 3090 has.
I don't recall having this issue when generating at 512x512 and then hires fixing to 1024x1024 around a month ago; I'm going from memory, but I'm pretty sure it only used 12 GB and 16 GB respectively. I don't currently have a 512 model to test with, but running a 768 model at 512x512 resulted in 14 GB of VRAM usage, and then a crash with an OOM error trying to allocate 2 GB when hires fixing to 1024x1024.
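In case it helps to compare numbers, here's a minimal PyTorch sketch of how I'd measure peak VRAM around a generation pass; the `run_generation` call is just a hypothetical placeholder for whatever triggers the pipeline, not an actual WebUI function:

```python
import torch

def report_peak_vram(label: str) -> None:
    """Print the peak VRAM PyTorch has allocated since the last reset."""
    peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"{label}: peak VRAM allocated = {peak_gb:.1f} GB")

# Reset the counter, run one pass, then report.
torch.cuda.reset_peak_memory_stats()
run_generation(width=768, height=768)        # hypothetical: base 768x768 pass
report_peak_vram("768x768 with PWW + 2x ControlNet")

torch.cuda.reset_peak_memory_stats()
run_generation(width=1420, height=1420)      # hypothetical: hires fix pass
report_peak_vram("1420x1420 hires fix")
```

(Note this only counts memory held by the PyTorch allocator, so it can read a bit lower than what nvidia-smi shows, but it's enough to see the spike with PWW on vs. off.)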
Not sure if this is intentional behavior, but if it is, it would make the extension pretty much useless on any consumer-grade GPU, even the highest-end ones available.
But if it's not intentional behavior, I thought I'd bring it up.