Getting torch.cuda.OutOfMemoryError: CUDA out of memory while running diffusers/examples/dreambooth/train_dreambooth.py #6932
- Can you try adding the …
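The specific flag suggested above is cut off in this copy of the thread, so the following is an assumption rather than the original advice: the switches most often recommended for CUDA OOM with train_dreambooth.py, per the diffusers DreamBooth README, are gradient checkpointing, 8-bit Adam (requires bitsandbytes), and fp16 mixed precision. A hedged sketch of the launch command quoted later in the thread with those flags added:

```bash
# Hedged sketch: same arguments as the command quoted below, plus the
# memory-saving flags documented in the diffusers DreamBooth README
# (not necessarily the flag the commenter meant).
accelerate launch diffusers/examples/dreambooth/train_dreambooth.py \
  --pretrained_model_name_or_path=CompVis/stable-diffusion-v1-4 \
  --instance_data_dir=diffusers/Dog \
  --output_dir=diffusers/Dog_SD \
  --instance_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --use_8bit_adam \
  --mixed_precision="fp16" \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=400
```

`--use_8bit_adam` needs `pip install bitsandbytes`, and gradient checkpointing trades extra compute for a large cut in activation memory, which is usually what makes the difference on cards in the 8-10 GiB range.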
- Sorry, I'm actually working right now and try to help when I have time, so I can't spend that much time on this. You're having conflicts between the installations of torch and torchvision; try doing a … If that doesn't work, then something changed in Colab between when I did it and now. Sadly I can't use Colab anymore since I used it for another answer and it doesn't let me run more sessions. When I can connect to it again I can help more, if no one else has done it by then.
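The exact command suggested in this reply didn't survive in this copy of the thread, but the torch/torchvision mismatch it describes is easy to confirm. A hedged diagnostic sketch (not the original suggestion): check which builds are actually installed and whether pip considers them compatible.

```bash
# Print installed versions and the CUDA build torch was compiled against.
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.version.cuda)"
# Ask pip to report broken or conflicting dependency pairs.
pip check
```

If the two packages don't come from a matching release pair, reinstalling them together so pip resolves compatible versions is the usual fix.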
- I'm trying the following command and getting the error below:

!accelerate launch diffusers/examples/dreambooth/train_dreambooth.py \
  --pretrained_model_name_or_path=CompVis/stable-diffusion-v1-4 \
  --instance_data_dir=diffusers/Dog \
  --output_dir=diffusers/Dog_SD \
  --instance_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=400 \
  --push_to_hub

![Screenshot 2024-02-11 101517](…)
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 0; 9.50 GiB total capacity; 1.13 GiB already allocated; 24.94 MiB free; 1.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
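The last line of the message points at the allocator knob it names: when reserved memory far exceeds allocated memory, capping max_split_size_mb can reduce fragmentation. A minimal sketch of setting it before relaunching (128 is only an example value; the launch command itself stays the same):

```bash
# Allocator tuning suggested by the error text; set before rerunning the launch command.
# In a Colab cell the IPython equivalent is: %env PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
```

That said, the message reports only ~25 MiB free out of 9.50 GiB while PyTorch itself has reserved about 1.2 GiB, so something else appears to be holding most of the GPU memory already; checking `nvidia-smi` (or restarting the runtime) and adding the memory-saving flags sketched after the first reply are likely to matter more than the allocator setting.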