lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0

Implementing lora-ctl with webui-forge #68

Open Omegastick opened 9 months ago

Omegastick commented 9 months ago

Is there an existing issue for this?

What would your feature do?

I'd like to get the sd-webui-loractl extension working with webui-forge. It works with the latest Automatic1111 commit, so I'm guessing one of the more central changes here is making it incompatible.

Currently, it looks like the injected ExtraNetworkLora wrapper params are being inserted (it passes a dummy 1.0), but the new logic isn't being triggered.

I'm open to forking (or rewriting) it myself, but I'd appreciate any guidance on where to start.

Proposed workflow

  1. Install extension
  2. Activate in WebUI
  3. Use <lora:lora_name:0.0@0,1.0@1> syntax

Additional information

No response

sashasubbbb commented 9 months ago

I agree, this is a much-needed extension.

Omegastick commented 9 months ago

"by any chance do you use civitai helper extension and is that working for you in this fork?"

Sorry, I don't use that extension.

rafstahelin commented 9 months ago

second that feature request

311-code commented 9 months ago

I would also love to see this extension working in Forge; it's so underrated. It's really awesome being able to control at what step a too-strong lora kicks in and then kicks out, or to mix loras however you want. Here's a recent post that covered it: https://old.reddit.com/r/StableDiffusion/comments/1aqlvi0/psa_dont_ignore_the_sdwebuiloractl_extension_for/

cheald commented 9 months ago

Hi there. I'm the author of loractl. I've been out of the SD scene for a bit and don't have bandwidth to adapt loractl to Forge, but it's fundamentally a monkeypatch on A1111's network handling code. It's doing two specific things:

  1. First, it registers itself as the handler for the lora extra network when it's enabled. This enables it to take over parsing for those <lora:...> blocks. A1111 automatically feeds them to the extension.
  2. Second, it patches the network.Network class by hijacking its te_multiplier and unet_multiplier properties, and replacing them with properties which actually call a function in loractl to compute the te/unet multipliers (the lora weight) instead. Those functions consider the current generation step as a part of the value they return.
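As an illustration of step 2, the property hijack could look roughly like this. This is a hedged sketch with stand-in names (Network, dynamic_weight, the step state are all illustrative), not A1111's or loractl's actual code:

```python
# Illustrative sketch of the property-hijack idea in step 2.
# Network, dynamic_weight, and the step state are stand-ins,
# not the real A1111/loractl API.

class Network:
    def __init__(self, te_multiplier, unet_multiplier):
        self._te_multiplier = te_multiplier
        self._unet_multiplier = unet_multiplier

# In practice the current step would be updated by a sampler callback.
state = {"step": 0, "total_steps": 10}

def dynamic_weight(base):
    # Toy schedule: ramp linearly from 0 up to the base weight over all steps.
    return base * min(1.0, state["step"] / state["total_steps"])

# Monkeypatch: replace the plain attributes with computed properties,
# so every read of te_multiplier/unet_multiplier is step-aware.
Network.te_multiplier = property(lambda self: dynamic_weight(self._te_multiplier))
Network.unet_multiplier = property(lambda self: dynamic_weight(self._unet_multiplier))
```

Because A1111 re-reads these properties on every step, changing the value they return is enough to make it unapply the old lora weights and apply the new ones.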

A1111, on each step, checks to see which extra networks are in play, and at what weights. Fudging the te_multiplier/unet_multiplier weights has the effect of causing A1111 to unapply the previous lora weights and reapply new ones at the new weights given. Forge appears to do something similar, which means the concept should work, in theory! (Edit: After looking at this, it looks like Forge only attempts to set up networks once per image, rather than once per step, so loractl as it's implemented in A1111 would not work.)

All the heavy lifting is done by A1111 - all loractl is really doing is changing the te_multiplier and unet_multiplier values on a per-step basis, which just happens to make A1111 do the right thing.

From a quick gander at the Forge network handling, te/unet weights are calculated during network activation, but rather than being taken from overridable properties, they're taken directly from the params parameter passed to activate. To make the loractl concept work, ExtraNetworkLora#activate would have to be monkeypatched with a replacement implementation that parses the params list with the multi-step parser (which could be lifted directly from loractl's utils) and then computes the te/unet weights from the parsed keyframes, the current step, and the total number of desired steps, like loractl does here.
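For illustration, the keyframe parsing and per-step weight computation described above might look like the following sketch. The function names and exact semantics are assumptions loosely modeled on loractl's syntax (positions at or below 1.0 read as fractions of the total step count), not its actual code:

```python
def parse_keyframes(spec):
    """Parse a 'weight@pos,weight@pos,...' spec into sorted (pos, weight)
    pairs. A bare 'weight' with no '@' is treated as a constant from step 0.
    This is a sketch of the syntax, not loractl's actual parser."""
    frames = []
    for part in spec.split(","):
        if "@" in part:
            w, pos = part.split("@")
            frames.append((float(pos), float(w)))
        else:
            frames.append((0.0, float(part)))
    return sorted(frames)

def weight_at(frames, step, total_steps):
    """Linearly interpolate the lora weight for the current step."""
    # Positions <= 1.0 are fractions of total steps; larger ones are absolute.
    pts = [(p * total_steps if p <= 1.0 else p, w) for p, w in frames]
    if step <= pts[0][0]:
        return pts[0][1]
    for (p0, w0), (p1, w1) in zip(pts, pts[1:]):
        if step <= p1:
            t = (step - p0) / (p1 - p0)
            return w0 + t * (w1 - w0)
    return pts[-1][1]
```

With a spec like 0.0@0,1.0@1, weight_at(parse_keyframes("0.0@0,1.0@1"), 10, 20) ramps to half weight midway through a 20-step run.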

In theory it shouldn't be too difficult, but it would probably need to be a fork of loractl, since it would be targeting a fundamentally different codebase, and loractl is tiny enough that a little copying is better than a little dependency.

If the authors of Forge wanted to support it, based on a 3-minute review of the code, I think that just directly patching the ExtraNetworkLora#activate method to understand and use the extended syntax should be sufficient to get loractl-style functionality in Forge. It'd certainly be a lot cleaner than trying to patch it in with an extension.
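A minimal sketch of that direct patch, again with stand-in names: the real ExtraNetworkLora.activate signature and params structure in Forge differ, and the ramp below is a naive first/last-keyframe interpolation rather than loractl's full multi-keyframe parser:

```python
# Hedged sketch: wrap a stand-in ExtraNetworkLora.activate so keyframed
# weights are resolved for the current step before activation runs.
# All names and signatures here are illustrative, not the real Forge API.

class ExtraNetworkLora:
    def activate(self, params_list):
        # Stand-in for Forge's real activation: return the applied weights.
        return {name: float(w) for name, w in params_list}

_original_activate = ExtraNetworkLora.activate

def loractl_activate(self, params_list, step=0, total_steps=20):
    # Naive two-keyframe linear ramp; a real port would reuse loractl's
    # full keyframe parser and interpolation here.
    resolved = []
    for name, w in params_list:
        if "@" in w:
            first, last = w.split(",")[0], w.split(",")[-1]
            w0 = float(first.split("@")[0])
            w1 = float(last.split("@")[0])
            t = min(1.0, step / max(total_steps, 1))
            w = str(w0 + t * (w1 - w0))
        resolved.append((name, w))
    return _original_activate(self, resolved)

ExtraNetworkLora.activate = loractl_activate
```

The wrapper rewrites each keyframed weight to a plain number before delegating, so the original activation logic needs no changes at all, which is the appeal of this approach.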

lllyasviel commented 9 months ago

thanks a lot. we will take a look soon about lora system

kyle215ps3 commented 9 months ago

@lllyasviel Thank you!!! This is absolutely needed!!! It's amazing being able to force the initial generation to place the character first before the LoRa does its work. It's even better when paired with BREAK keywords!

davizca commented 9 months ago

Thanks a lot! Will be very handy to have dynamic weighting in LORA's in Forge!

rafstahelin commented 9 months ago

I second that!

kyle215ps3 commented 9 months ago

I third this! :D

BurnZeZ commented 8 months ago

"A1111, on each step, checks to see which extra networks are in play, and at what weights. Fudging the te_multiplier/unet_multiplier weights has the effect of causing A1111 to unapply the previous lora weights and reapply new ones at the new weights given."

@cheald Is this computationally expensive?

cheald commented 8 months ago

Yes, it's why loractl tanks it/s during periods when lora weights are changing step by step.

Ppepepe2 commented 7 months ago

Any news on the matter? This extension would be so awesome.

altoiddealer commented 3 months ago

"thanks a lot. we will take a look soon about lora system"

@lllyasviel Hello - I see you are very busy, with very important things!

You flagged this issue as "High Priority" - I just wanted to remind you :)

I don't know if this has any hope to fit into your plans, but it would be so nice.

Keep up the amazing work! It is very exciting to have you back

moudahaddad14 commented 1 month ago

Any news on this extension? It might be a game changer for loras in Forge tbh! 🤩

altoiddealer commented 1 month ago

Trying to make progress towards this on my branch here, with the intention that loractl would be implemented as a built-in extension.

From what little I've done so far, it seems to get hooked in with the Lora networks and dynamically calculate the weights based on the prompt syntax, exactly as it does in A1111.


It's just a matter of those weights actually taking effect...

I shared this over at sd-loractl to see if the author has any further input...

https://github.com/cheald/sd-webui-loractl/issues/32

@Panchovix pinging you because you seemed to have a better understanding of Forge LORA inner workings based on your comments here

https://github.com/Panchovix/stable-diffusion-webui-reForge/issues/36

altoiddealer commented 1 week ago

Guys - I did not so much as even attempt to make any further progress with this.

BUT

@Panchovix DID make the dream happen in ReForge. He has yet to say whether he has ambitions to push it to Forge or not, or comment on whether it is even possible.

I'm just writing this so everyone in love with Forge's memory handling and the original loractl extension can NOW enjoy both with ReForge.

https://github.com/Panchovix/stable-diffusion-webui-reForge

https://github.com/Panchovix/stable-diffusion-webui-reForge/issues/36

Edit: Panchovix has commented that he is going to try making a PR to implement this in Forge.