Replies: 2 comments 1 reply
-
Could you try Kaggle's notebooks? They give more RAM.
-
It's unfortunately in memory-bound territory, and there are limited options without making the library overly bloated. You could consider serializing the components of the pipeline in isolation, i.e., extracting the scheduler, UNet, text encoders, and VAE from the single-file checkpoint and serializing them one at a time. For this, you will have to refer to the https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/single_file_utils.py script and repurpose it. The benefit is that only a single component is ever present in memory instead of all of them at once. Cc'ing @DN6 too in case of any other ideas.
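As a rough illustration of the idea, here is a minimal, untested sketch. It assumes a recent diffusers release where some component classes expose `from_single_file`; the checkpoint and output paths are placeholders:

```python
import gc

import torch
from diffusers import AutoencoderKL, UNet2DConditionModel

ckpt_path = "my_sdxl_checkpoint.safetensors"  # placeholder path
out_dir = "converted_model"                   # placeholder path

# Load and save one component at a time so that only one of them is
# ever resident in RAM; free each one before touching the next.
unet = UNet2DConditionModel.from_single_file(ckpt_path, torch_dtype=torch.float16)
unet.save_pretrained(f"{out_dir}/unet")
del unet
gc.collect()

vae = AutoencoderKL.from_single_file(ckpt_path, torch_dtype=torch.float16)
vae.save_pretrained(f"{out_dir}/vae")
del vae
gc.collect()
```

The text encoders, tokenizers, and scheduler are not covered by this sketch; for those you would repurpose the conversion functions in single_file_utils.py as described above, or copy their configs from the base SDXL repository if the fine-tune left them untouched.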
-
Can I lower the RAM usage when converting XL models? Colab's free plan doesn't offer enough RAM for this kind of task. I'm using this code to convert:
I also tried the convert_original_stable_diffusion_to_diffusers.py script; it can convert 1.5/2.0 models, but not XL models. Any suggestions, please?
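(For context, converting a single-file SDXL checkpoint usually looks roughly like the following sketch, with placeholder paths; loading every component at once is what exhausts RAM:)

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Loads the scheduler, UNet, both text encoders, and the VAE all at once.
pipe = StableDiffusionXLPipeline.from_single_file(
    "my_sdxl_checkpoint.safetensors",  # placeholder path
    torch_dtype=torch.float16,
)
pipe.save_pretrained("converted_model", safe_serialization=True)
```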