How to merge a checkpoint with the main model? #8270

Answered by simbrams
simbrams asked this question in Q&A

Hey Simon, when you set pipe.save_pretrained(..., variant='fp32'), do you get the unet you expect?
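
For reference, the variant argument to save_pretrained only controls the filename suffix the weight files are written under (e.g. diffusion_pytorch_model.fp32.safetensors); it does not change the dtype. A minimal sketch, where "my-model-fp32" is a placeholder output directory:

# Sketch only: "my-model-fp32" is a placeholder output directory.
pipe.save_pretrained(
    "my-model-fp32",
    variant="fp32",  # weight files get an .fp32. suffix; the dtype is left unchanged
)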

Hey @StandardAI, thanks a lot for the tip!

I ended up getting a KeyError: 'shortest_edge' when loading the merged model, so I updated transformers to the latest version (4.41.0) and that fixed it.

I also tried merging the model using the default fp16 weights, and it works like a charm 👌

import torch
from diffusers import StableDiffusionPipeline

# Load the base model in fp16
pipe = StableDiffusionPipeline.from_pretrained(
    'me/my-model',
    torch_dtype=torch.float16,
    local_files_only=True,
    safety_checker=None,
).to("cuda")

# Load the attention-processor weights from the training checkpoint
pipe.unet.load_attn_procs(
    'me/my-model',
    subfolder="checkpoint-10000",
    weight_name="optimizer.bin",
    # cache_dir=...,  # optional: point at a custom cache directory
)
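
The snippet above stops before the final step. A minimal sketch of writing the merged pipeline back to disk follows; "me/my-merged-model" is a placeholder output directory, and fuse_lora() is an assumption (available in recent diffusers releases) that folds the loaded LoRA weights into the base UNet so the saved copy no longer needs the checkpoint:

# Minimal sketch, assuming a recent diffusers release that exposes fuse_lora().
# "me/my-merged-model" is a placeholder output directory, not from this thread.
pipe.fuse_lora()  # fold the loaded LoRA weights into the base weights
pipe.save_pretrained(
    "me/my-merged-model",
    safe_serialization=True,  # write .safetensors files
)

The resulting directory can then be reloaded with StableDiffusionPipeline.from_pretrained like any other model.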
