Closed adbrasi closed 7 months ago

Firstly, thank you very much for creating this incredible project! I'm having a bit of difficulty making sure I'm doing things correctly, so I'd like to ask: if you have the time, could you please provide an example of a prompt that would allow me to gradually increase the value from (yellow cat:1) to (yellow cat:2)? Additionally, would it be possible to do this with the weights of a LoRA?
There's no built-in way to do this, but you can do it with sequences and Jinja templating; see issue #39. The same approach works for both LoRA weights and attention weights.
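You can also approximate a gradual increase with the scheduling syntax explained further down in this thread. For example, a stepwise ramp (a sketch using only the [a:b:t] switch syntax; the step points and weights here are arbitrary):

[(yellow cat:1):[(yellow cat:1.3):[(yellow cat:1.6):(yellow cat:2):0.75]:0.5]:0.25]

This renders (yellow cat:1) for the first 25% of the steps, then bumps the weight to 1.3 at 25%, to 1.6 at 50%, and to 2 at 75%.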
I have noticed that when you combine it with other control mechanisms, the time to render goes up substantially.
The nodes need to apply the CLIP model to every prompt to create the embeddings, and if you change LoRAs, it needs to unload the existing patches and reload them with the changes, which can take quite a lot of time, especially if you have limited VRAM and need to offload to CPU. However, it shouldn't take any more time than doing the same thing manually with plain ComfyUI nodes would.
ah, ok! thank you so much for replying!!! Last question (forgive me if it's a very stupid question):
What exactly does [lora:somelora:0.5:0.6::0.5] do?
And this: [in a park:in space:0.4]?
[I've used the node and loved it, I just want to make sure I'm not wrong before I teach it to some friends]
EDIT: ah, I see, GitHub ate part of the syntax...
[lora:somelora:0.5:0.6::0.5] isn't correct syntax. The <> characters are important in the lora syntax; you apply a LoRA like in A1111 with <lora:loraname:weight>.
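So a valid prompt would look something like this (a hypothetical example; the LoRA name and weight are placeholders):

masterpiece, a yellow cat <lora:somelora:0.7>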
[a:b:0.3] in general switches from a to b at timestep 0.3. a and b can be any text prompt, but also any LoRA specification, so [<lora:cats:0.5>:<lora:cats:0.7>:0.2] switches from the "cats" LoRA at 0.5 strength to the same LoRA at 0.7 strength at 20% of the timesteps (0.2 out of 1, with 1 representing the total number of steps).
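Applying that rule to the example asked about earlier in the thread:

a cat [in a park:in space:0.4]

renders "a cat in a park" for the first 40% of the steps and "a cat in space" for the rest.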
You can also nest the switches arbitrarily, so you can have stuff like [a:[b:c:0.5]:0.2], which is a bit difficult to read but switches from a to b at 0.2 and from b to c at 0.5.
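Nesting is also how you could ramp a LoRA weight in steps, as asked in the original question. A sketch (the strengths and step points are arbitrary):

[<lora:cats:0.4>:[<lora:cats:0.6>:<lora:cats:0.8>:0.6]:0.3]

applies the "cats" LoRA at 0.4 strength until 30% of the steps, at 0.6 strength until 60%, and at 0.8 strength after that.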
Ah, incredible. Thank you very much!!!! (you're an absolute genius)