
Conversation

lavinal712
Contributor

@lavinal712 lavinal712 commented Jan 30, 2025

This PR is a continuation of the discussions in #4679 and #4899, and it addresses the following issues:

  1. Loading SAI's control-lora files and enabling controlled image generation.
  2. Building a control-lora Pipeline and Model for user convenience.

This code is only an initial version and contains many makeshift solutions as well as several open issues. These are my current observations:

  1. Long Loading Time: I suspect this is due to repeatedly loading the model weights.
  2. High GPU Memory Usage During Runtime: Compared to a regular ControlNet, Control-Lora should actually save GPU memory during runtime (this phenomenon can be observed in sd-webui-controlnet). I believe that the relevant parts of the code have not been handled properly.

@lavinal712
Contributor Author

[image: 图像 (1)]

To reproduce, run

cd src
python -m diffusers.pipelines.control_lora.pipeline_control_lora_sd_xl

@sayakpaul sayakpaul self-requested a review January 30, 2025 03:23
@lavinal712
Contributor Author

My solution is based on https://github.com/Mikubill/sd-webui-controlnet/blob/main/scripts/controlnet_lora.py and https://github.com/HighCWu/control-lora-v2/blob/master/models/control_lora.py, but it differs in several ways. Here are my observations and solutions:

  1. The weight format of control-lora differs from the lora format in the peft library; it comprises two parts: lora weights and fine-tuned parameter weights. The lora weights use the suffixes "up" and "down". From my observation, existing libraries cannot load these weights directly (I once worked on reproducing it at https://github.com/lavinal712/control-lora-v3, which trains lora together with specific layers and converts the weight names from diffusers to stable diffusion format, with good results).
  2. The prefix of control-lora's weight names follows the stable diffusion format, which poses some challenges when converting to the diffusers format (I had to use some hacky code to solve this).
  3. My approach is as follows: I convert the linear and conv2d layers across all layers into LoRA-wrapped variants, then reconstruct the controlnet from the unet and load both the lora weights and the fine-tuned parameters from the control-lora checkpoint (see the sketch after this list).
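
For illustration, a minimal sketch of the LoRA-wrapped linear layer idea (simplified; the class and attribute names are illustrative, not the exact code in this PR; conv2d layers are wrapped analogously):

import torch.nn as nn

class LinearWithLoRA(nn.Module):
    """Wrap a frozen nn.Linear with a low-rank "down"/"up" update, mirroring how
    control-lora stores its lora weights."""

    def __init__(self, base: nn.Linear, rank: int = 128):
        super().__init__()
        self.base = base  # frozen weight taken from the unet/controlnet
        self.down = nn.Linear(base.in_features, rank, bias=False)  # "down" lora weight
        self.up = nn.Linear(rank, base.out_features, bias=False)   # "up" lora weight

    def forward(self, x):
        # base projection plus the low-rank correction
        return self.base(x) + self.up(self.down(x))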

else:
    raise ValueError

config = ControlNetModel.load_config("xinsir/controlnet-canny-sdxl-1.0")
Member

Contributor Author

We cannot, because control-lora does not provide a config.json file.

Member

@sayakpaul sayakpaul left a comment

Thanks for starting this!

In order to get this PR ready for reviews, we would need to:

  • Use peft for all things LoRA instead of having to rely on things like LinearWithLoRA.
  • We should be able to run the LoRA conversion on the checkpoint during loading like how it's done for other LoRA checkpoints. Here is an example.
  • Ideally, users should be able to call ControlNetModel.load_lora_adapter() (method reference) on a state dict and we run the conversion first if needed and then take the rest of the steps.

The higher-level design I am thinking of goes as follows:

controlnet = # initialize ControlNet model.

# load ControlNet-LoRA into `controlnet`
controlnet.load_lora_adapter("stabilityai/control-lora", weight_name="...")

pipeline = # initialize ControlNet pipeline.

...
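
Spelled out slightly more concretely (initializing the ControlNet from the SDXL UNet and the weight filename below are illustrative choices, not prescriptive):

from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, UNet2DConditionModel

# initialize a ControlNet from the SDXL UNet (control-lora ships no config.json of its own)
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
controlnet = ControlNetModel.from_unet(unet)

# load Control-LoRA into `controlnet` (filename is illustrative)
controlnet.load_lora_adapter(
    "stabilityai/control-lora", weight_name="control-lora-canny-rank128.safetensors"
)

pipeline = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, unet=unet
)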

LMK if this makes sense. Happy to elaborate further.

@lavinal712
Contributor Author

lavinal712 commented Jan 30, 2025

Thanks for starting this!

In order to get this PR ready for reviews, we would need to:

  • Use peft for all things LoRA instead of having to rely on things like LinearWithLoRA.
  • We should be able to run the LoRA conversion on the checkpoint during loading like how it's done for other LoRA checkpoints. Here is an example.
  • Ideally, users should be able to call ControlNetModel.load_lora_adapter() (method reference) on a state dict and we run the conversion first if needed and then take the rest of the steps.

The higher-level design I am thinking of goes as follows:

controlnet = # initialize ControlNet model.

# load ControlNet-LoRA into `controlnet`
controlnet.load_lora_adapter("stabilityai/control-lora", weight_name="...")

pipeline = # initialize ControlNet pipeline.

...

LMK if this makes sense. Happy to elaborate further.

I have reservations, because I have observed that control-lora requires less memory than controlnet, yet running it in this manner requires at least as much memory as controlnet. I want control-lora not only to be a lora but also to be a memory-saving model. Of course, the existing code cannot handle this yet, and it will require future improvements.

@sayakpaul
Member

I want control-lora not only to be a lora but also to be a memory-saving model.

If we do incorporate peft (the way I am suggesting), it will be compatible with all the memory optims we already offer from the library.

@lavinal712
Contributor Author

If we do incorporate peft (the way I am suggesting), it will be compatible with all the memory optims we already offer from the library.

I once observed while running sd-webui-controlnet that peak VRAM usage was 5.9GB with the sd1.5 controlnet and 4.7GB with the sd1.5 control-lora. Clearly, sd-webui-controlnet employs some method to reuse weights rather than simply merging the lora weights on top of controlnet. Can loading control-lora in this manner provide such VRAM optimization?

@sayakpaul
Member

I am quite sure we can achieve those numbers without having to do too much given the recent set of optimizations we have shipped and are going to ship.

Clearly, sd-webui-controlnet employs some method to reuse weights rather than simply merging the lora weights on top of controlnet.

We're not merging the LoRA weights into the base model when initially loading the LoRA checkpoint. That goes against our LoRA design. Users can always merge the LoRA params into the base model params after loading them, but that is not the default behaviour.
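
For reference, merging remains an explicit, reversible opt-in after loading (a minimal sketch; the repo id is illustrative):

# LoRA params stay separate from the base weights by default
pipeline.load_lora_weights("some-user/some-lora")

# users can explicitly fold them into the base weights, and undo it again
pipeline.fuse_lora()
pipeline.unfuse_lora()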

@lavinal712
Contributor Author

Good. With this concern resolved, I believe such a design is reasonable. It is simpler and more user-friendly.

@sayakpaul
Member

Appreciate the understanding. LMK if you would like to take a crack at the suggestions I provided above.

@lavinal712
Contributor Author

I encountered a problem: after running the command python -m diffusers.pipelines.control_lora.control_lora, the following error occurred:

Traceback (most recent call last):
  File "/home/azureuser/miniconda3/envs/diffusers/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/azureuser/miniconda3/envs/diffusers/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/pipelines/control_lora/control_lora.py", line 19, in <module>
    controlnet.load_lora_weights(lora_id, weight_name=lora_filename, controlnet_config=controlnet.config)
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/loaders/controlnet.py", line 178, in load_lora_weights
    self.load_lora_into_controlnet(
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/loaders/controlnet.py", line 212, in load_lora_into_controlnet
    controlnet.load_lora_adapter(
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/loaders/peft.py", line 293, in load_lora_adapter
    is_model_cpu_offload, is_sequential_cpu_offload = self._optionally_disable_offloading(_pipeline)
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/loaders/peft.py", line 139, in _optionally_disable_offloading
    return _func_optionally_disable_offloading(_pipeline=_pipeline)
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/loaders/lora_base.py", line 435, in _func_optionally_disable_offloading
    if _pipeline is not None and _pipeline.hf_device_map is None:
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/models/modeling_utils.py", line 187, in __getattr__
    return super().__getattr__(name)
  File "/home/azureuser/miniconda3/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
    raise AttributeError(
AttributeError: 'ControlNetModel' object has no attribute 'hf_device_map'

You can read the code. Does this method meet your expectations?

@lavinal712
Contributor Author

@sayakpaul Can you help me solve this problem?

@sayakpaul
Member

Can you help me understand why python -m diffusers.pipelines.control_lora.control_lora needs to be run?

@lavinal712
Contributor Author

My design is as follows: the core code lives in src/diffusers/loaders/controlnet.py, and ControlNetLoadersMixin is added as a parent class of ControlNetModel in src/diffusers/models/controlnets/controlnet.py, providing the implementation of load_lora_weights. diffusers.pipelines.control_lora.control_lora is test code whose purpose is to load LoRA into ControlNetModel; it should eventually be cleaned up.
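
Roughly, the wiring looks like this (simplified sketch; unrelated base classes and the method body are elided):

# src/diffusers/loaders/controlnet.py (simplified)
class ControlNetLoadersMixin:
    def load_lora_weights(self, pretrained_model_name_or_path_or_dict, **kwargs):
        # load the control-lora state dict, convert it, and apply it to this model
        ...

# src/diffusers/models/controlnets/controlnet.py (simplified)
class ControlNetModel(ModelMixin, ConfigMixin, ControlNetLoadersMixin):
    ...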

@sayakpaul
Member

load_lora_weights() is implemented at the pipeline level. ControlNetModel is subclassed from ModelMixin. So, we will rather have to implement the load_lora_adapter() method:

def load_lora_adapter(self, pretrained_model_name_or_path_or_dict, prefix="transformer", **kwargs):

@lavinal712
Contributor Author

I'm having trouble converting the prefix of control-lora into the diffusers format. The prefix of control-lora is in the sd format, while the loaded controlnet is in the diffusers format. I can't find a clean and efficient way to achieve the conversion. Could you provide some guidance? @sayakpaul

@sayakpaul
Member

I'm having trouble converting the prefix of control-lora into the diffusers format. The prefix of control-lora is in the sd format, while the loaded controlnet is in the diffusers format. I can't find a clean and efficient way to achieve the conversion. Could you provide some guidance? @sayakpaul

You could refer to the following function to get a sense of how we do it for other non-diffusers LoRAs:

def _convert_non_diffusers_lora_to_diffusers(state_dict, unet_name="unet", text_encoder_name="text_encoder"):
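
For intuition, the kind of remapping such a conversion performs looks roughly like this (a simplified sketch; the exact key suffixes and the helper name are assumptions, not the real conversion code):

def convert_control_lora_suffixes(state_dict):
    # Map the SAI "down"/"up" lora suffixes onto the peft-style "lora_A"/"lora_B"
    # names. A real conversion must also remap the SD-style key prefixes (e.g.
    # "input_blocks.*") to the corresponding diffusers module paths.
    converted = {}
    for key, value in state_dict.items():
        if key.endswith(".down.weight"):
            converted[key[: -len(".down.weight")] + ".lora_A.weight"] = value
        elif key.endswith(".up.weight"):
            converted[key[: -len(".up.weight")] + ".lora_B.weight"] = value
        else:
            converted[key] = value  # fine-tuned (non-lora) params pass through unchanged
    return converted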

Would this help?

@lavinal712
Contributor Author

I tried to load Control-LoRA in the load_lora_adapter() function of the PeftAdapterMixin class. However, by default, the keys for the model weights are in the form lora_A.default_0.weight instead of the expected lora_A.weight. This is caused by adapter_name = get_adapter_name(self). Could you please tell me what the default format of the LoRA model weight keys is and how to resolve this issue? @sayakpaul

@sayakpaul
Member

I think the easiest might be to have a class for Control LoRA derived from PeftAdapterMixin and override the load_lora_adapter() method. We can handle the state dict conversion directly there so that the SD format is first converted into the peft format. WDYT?

@lavinal712
Contributor Author

I think the easiest might be to have a class for Control LoRA derived from PeftAdapterMixin and override the load_lora_adapter() method. We can handle the state dict conversion directly there so that the SD format is first converted into the peft format. WDYT?

Is there any example?

@sayakpaul
Member

There is none, but here is how it might look in terms of pseudo-code:

class ControlLoRAMixin(PeftAdapterMixin):
    def load_lora_adapter(...):
        state_dict = # convert the state dict from SD format to peft format.
        ...
        # proceed with the rest of the logic.
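
Fleshed out slightly (still pseudo-code; the converter helper is hypothetical):

class ControlLoRAMixin(PeftAdapterMixin):
    def load_lora_adapter(self, pretrained_model_name_or_path_or_dict, prefix=None, **kwargs):
        # resolve the checkpoint (repo id, local path, or dict) into a raw state dict
        state_dict = ...
        # convert SD-format control-lora keys ("up"/"down", SD block names) into the
        # peft format, then hand off to the regular peft loading path
        state_dict = convert_control_lora_state_dict_to_peft(state_dict)  # hypothetical helper
        super().load_lora_adapter(state_dict, prefix=prefix, **kwargs)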

@lavinal712
Contributor Author

Okay, I will give it a try.

@lavinal712
Contributor Author

lavinal712 commented Jul 2, 2025

I plan to stop updating this branch and start from a fresh branch, which will help me adapt the code better. When the new branch works, I will close this one.

@lavinal712
Contributor Author

lavinal712 commented Jul 4, 2025

Oh, dear! Worse and worse.

[image: 20250704-183537]

@lavinal712
Contributor Author

lavinal712 commented Jul 5, 2025

@sayakpaul Please review this PR. I fixed an imprecise part in peft_util.py and modified the config to make Control-Lora work properly. You can clone this branch and run the code mentioned above.

@lavinal712
Contributor Author

[image]

@lavinal712
Contributor Author

lavinal712 commented Jul 15, 2025

Monday left me broken
Tuesday I was through with hoping
Wednesday my empty arms were open
Thursday waiting for love waiting for love

@iwr-redmond

Thanks for keeping the PR branch up to date, @lavinal712. You may wish to add some minimal reproducible examples that will assist with testing when @sayakpaul et al are ready to review it again.

@lavinal712
Contributor Author

lavinal712 commented Aug 19, 2025

@iwr-redmond Wait a moment, syncing the diffusers library has affected Control-Lora. The version at which control-lora can be reproduced is 23cba18.

Goodness, this is already the third time I have encountered this situation.

@sayakpaul
Member

syncing the diffusers library has affected Control-Lora.

How did this affect it?

@lavinal712
Contributor Author

syncing the diffusers library has affected Control-Lora.

How did this affect it?

I've located the error. It's in this code:

if prefix is not None:
    state_dict = {k.removeprefix(f"{prefix}."): v for k, v in state_dict.items() if k.startswith(f"{prefix}.")}
    if metadata is not None:
        metadata = {k.removeprefix(f"{prefix}."): v for k, v in metadata.items() if k.startswith(f"{prefix}.")}

The default prefix="transformer" causes the LoRA loader to fail to find the corresponding weights. I fixed this issue by setting prefix=None in the test code.
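
i.e., something along these lines in the test code (the weight filename is illustrative):

controlnet.load_lora_adapter(
    "stabilityai/control-lora",
    weight_name="control-lora-canny-rank128.safetensors",  # illustrative filename
    prefix=None,  # control-lora keys carry no "transformer." prefix to strip
)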

@lavinal712
Contributor Author

[image: hf-logo_canny]

@lavinal712
Contributor Author

lavinal712 commented Aug 20, 2025

@iwr-redmond @sayakpaul I have provided an example of control_lora in the https://github.com/lavinal712/diffusers/tree/control-lora/examples/research_projects/control_lora folder.

@iwr-redmond

Do the BFL Flux Control LoRAs (Canny, Depth) also work? The comment in peft.py is slightly ambiguous.

@lavinal712
Copy link
Contributor Author

@iwr-redmond I remember that someone else has implemented BFL Flux Control LoRAs, using a different approach from the one described in this PR. The Control-LoRA mentioned in this PR specifically refers to the model introduced in https://huggingface.co/stabilityai/control-lora, designed for use with the SDXL model.

@iwr-redmond

iwr-redmond commented Aug 21, 2025

As you correctly say: src/diffusers/loaders/lora_conversion_utils.py#L1052

You may wish to change some of the naming conventions in this PR to underscore that the new code is for SAI/SDXL Control LoRAs. The generic naming could be confusing in the future.
