After the last update, is training no longer possible on <8 GB? Before the update everything worked and consumed around 6 GB; now I always get CUDA out of memory with the same settings.
Answered by Boldor83, Jan 2, 2023
Yes, even LoRA doesn't work anymore :( Before, it used less than 5 GB. I have 8 GB of VRAM. It tries to allocate the same amount of VRAM whether I check LoRA or not. Both times: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 8.00 GiB total capacity; 7.14 GiB already allocated; 0 bytes free; 7.28 GiB reserved in total by PyTorch)
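
Not from the original reply, but if you want to see where the VRAM is actually going, here is a minimal sketch that prints PyTorch's allocated vs. reserved memory for the device. It assumes GPU 0 and a standard PyTorch install; the trainer/LoRA settings themselves are outside this snippet.

```python
# Minimal sketch: print PyTorch's view of GPU memory right before the failing step,
# so you can compare allocated (live tensors) against reserved (allocator cache).
# Assumes a single CUDA device (GPU 0); adjust the index for other setups.
import torch

def report_cuda_memory(device: int = 0) -> None:
    gib = 1024 ** 3
    allocated = torch.cuda.memory_allocated(device) / gib  # memory held by live tensors
    reserved = torch.cuda.memory_reserved(device) / gib    # memory cached by the allocator
    total = torch.cuda.get_device_properties(device).total_memory / gib
    print(f"GPU {device}: {allocated:.2f} GiB allocated, "
          f"{reserved:.2f} GiB reserved, {total:.2f} GiB total")

if torch.cuda.is_available():
    report_cuda_memory(0)
```

Calling this before and after enabling LoRA would show whether the allocator really requests the same amount of VRAM in both cases, as described in the error message above.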