comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

Feature request: Dynamic Lora Weights #2591

Open OliviaOliveiira opened 9 months ago

OliviaOliveiira commented 9 months ago

Hey there! There's an incredibly powerful extension for A1111 called Dynamic Lora Weights, which lets you control a LoRA's weight at any given step during generation. For instance, `0.2@0.2,1@0.2` means the LoRA weight stays at 0.2 until 20% of the steps, then ramps up to 1 from 20% until the end of the generation. Link to the extension: https://github.com/cheald/sd-webui-loractl
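To make the `weight@position` semantics concrete, here is a small illustrative sketch (not loractl's actual implementation) that parses such a schedule string and interpolates the weight for a given fraction of total steps:

```python
# Illustrative sketch of "weight@position" schedules (helper names are
# hypothetical, not loractl's real API): weight is held constant before
# the first point and after the last, with linear interpolation between.
def parse_schedule(spec):
    points = []
    for part in spec.split(","):
        weight, _, position = part.partition("@")
        points.append((float(position), float(weight)))
    return sorted(points)

def weight_at(points, frac):
    if frac <= points[0][0]:
        return points[0][1]
    if frac >= points[-1][0]:
        return points[-1][1]
    for (p0, w0), (p1, w1) in zip(points, points[1:]):
        if p0 <= frac <= p1:
            t = (frac - p0) / (p1 - p0)
            return w0 + t * (w1 - w0)

sched = parse_schedule("0.2@0.2,1@0.2")
weight_at(sched, 0.1)  # weight 0.2 in the first 20% of steps
weight_at(sched, 0.5)  # weight 1.0 afterwards
```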

ltdrdata commented 9 months ago

Take a look at this: https://github.com/asagi4/comfyui-prompt-control

Currently, weight scheduling is not available, but it seems like you could make a feature request there.

asagi4 commented 9 months ago

You can also combine my prompt control utility with any dynamic prompt utility, like the MUWildCard node from my other repo that implements prompt fusion style functions and variables: https://github.com/asagi4/comfyui-utility-nodes Or you can use the JinjaRender node for even more advanced dynamic prompts.

The syntax isn't quite as nice as sd-webui-loractl, but you can get them to do similar things.

brendanhoar commented 2 months ago

IIRC, discussion about this elsewhere indicated that developers were waiting for resolution of https://github.com/comfyanonymous/ComfyUI/pull/2666 , the execution model inversion changes, before attempting work like this.

Anyone want to take a look to see if this is more feasible now?

tncrdn commented 1 month ago

> You can also combine my prompt control utility with any dynamic prompt utility, like the MUWildCard node from my other repo that implements prompt fusion style functions and variables: https://github.com/asagi4/comfyui-utility-nodes Or you can use the JinjaRender node for even more advanced dynamic prompts.
>
> The syntax isn't quite as nice as sd-webui-loractl, but you can get them to do similar things.

So if I wanted to do something like Lora A starts at first step and stops at 15th step and Lora B starts at 15th step and is active until 60th step in a generation with 60 steps, how should i use prompt control and JinjaRender to do that?

asagi4 commented 1 month ago

@tncrdn You don't need JinjaRender for a simple case like that. You can just do `[<lora:A:weight>:<lora:B:weight>:0.25]`. Absolute step counts aren't supported (mostly because the total step count isn't available at cond generation time), so you'll have to calculate the fraction yourself, but it'll work. JinjaRender may be useful if you want more complicated schedules, since typing those by hand is going to be tedious.
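The fraction conversion mentioned above is simple arithmetic; a minimal sketch (the helper name is hypothetical, not part of prompt control):

```python
# Prompt control takes switch points as fractions of the total step count,
# so convert absolute steps to a fraction before writing the schedule.
def step_fraction(step, total_steps):
    return step / total_steps

# Lora A for steps 1-15, Lora B from step 15 to 60 in a 60-step run:
switch = step_fraction(15, 60)  # 0.25 -> [<lora:A:1>:<lora:B:1>:0.25]
```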

tncrdn commented 1 month ago

@asagi4 Thank you very much. And do I use the PromptToSchedule and ScheduleToModel nodes for that?

asagi4 commented 1 month ago

@tncrdn yes. PromptToSchedule does the parsing and ScheduleToModel applies a model patch that does the LoRA scheduling.

tncrdn commented 1 month ago

@asagi4 Thank you. One last question. Would this be better than using 2 KSampler Advanced nodes and setting start and stop steps?

asagi4 commented 1 month ago

@tncrdn If you use the PCSplitSampling node to enable split sampling, that's essentially what it will do.

The effects are different though. Doing two ksampler passes isn't quite the same thing as one pass with the same number of steps, at least with some samplers.

tncrdn commented 1 month ago

@asagi4 So is this the correct way to use it for one pass with SDXL? (I also used a ScheduleToCond.) Could you check the attached workflow, please? Also, does it work with Flux?

asagi4 commented 1 month ago

@tncrdn that works, though you don't necessarily need two separate schedules for ScheduleToModel and ScheduleToCond. In fact, you'll want to pass the LoRAs into ScheduleToCond too if you want the LoRA to apply to the text encoder, because otherwise they'll only apply to the unet.

tncrdn commented 1 month ago

@asagi4 So I wrote the prompt, lora and SDXL(896 1152, 896 1152, 0 0) into one PromptToSchedule node and sent it both to ScheduleToModel and ScheduleToCond. Thank you very much.

mcmonkey4eva commented 1 month ago

Native support for dynamic lora weights is being discussed and will likely happen soon™

tncrdn commented 1 month ago

@mcmonkey4eva That's great news. Will it support Flux as well? It would also be great to have native support for prompts like `[cat:dog:10]`, which changes the prompt from cat to dog after 10 steps (or as a fraction, if that's the only possible way), and for that to work with Flux too.

FilipAndersson245 commented 3 weeks ago

Any news on implementing dynamic lora weights?

Veilosity commented 2 weeks ago

> @tncrdn yes. PromptToSchedule does the parsing and ScheduleToModel applies a model patch that does the LoRA scheduling.

I'm not quite sure about the syntax for changing the LoRA strength in ComfyUI. Could you please let me know how to have a LoRA run at 0 for the first 10 steps, at 1 from steps 11 to 20, and at 0 again from steps 21 to 30?
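Since prompt control works in fractions rather than absolute steps, the switch points for such an on/off schedule can be worked out as plain arithmetic. A sketch (this is just the math, not prompt-control syntax; names are hypothetical):

```python
# For 30 total steps: LoRA off for steps 1-10, on at 1.0 for 11-20,
# off again for 21-30. The two switch points as fractions of the run:
total_steps = 30
on_at = 10 / total_steps   # ~0.333
off_at = 20 / total_steps  # ~0.667

def lora_weight(frac):
    # piecewise schedule matching the request above
    return 1.0 if on_at <= frac < off_at else 0.0
```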

mcmonkey4eva commented 2 weeks ago

The branch tracking work towards dynamic lora weights (and related features) is https://github.com/comfyanonymous/ComfyUI/tree/patch_hooks

Veilosity commented 2 weeks ago

I meant the syntax for using the https://github.com/asagi4/comfyui-prompt-control extension.