p1atdev / LECO

Low-rank adaptation for Erasing COncepts from diffusion models.
https://arxiv.org/abs/2303.07345
Apache License 2.0

Request: Option to limit LoRA layers #19

Open torridgristle opened 1 year ago

torridgristle commented 1 year ago

Sometimes I'll get LoRAs that make the image very faded out when I increase the strength. However, if I use the LoRA Block Weight extension to ignore the first and last few layers, this faded / low-contrast issue goes away with minimal impact on the intended changes. I believe training will also be faster if there are fewer layers to train.

For example, I would try to limit it to IN05, IN07, IN08, MID, OUT03, OUT04, OUT05, OUT06, OUT07, and the text encoder. This corresponds to the "MIDD" preset for the LoRA Block Weight extension, and it doesn't affect the contrast of the image.
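To illustrate what such a filter might look like, here is a minimal sketch (not LECO's actual API) that maps CompVis-style UNet module names such as `input_blocks.5`, `middle_block`, and `output_blocks.3` to LoRA Block Weight labels (IN05, MID, OUT03) and checks them against the allow-list above. The helper names `module_block` and `should_train` are hypothetical; the text encoder would be handled separately.

```python
import re
from typing import Optional

# Hypothetical helpers (not part of LECO): map a CompVis-style UNet module
# name to a LoRA Block Weight label, then test it against an allow-list.
BLOCK_PATTERNS = {
    "IN": re.compile(r"input_blocks\.(\d+)"),
    "OUT": re.compile(r"output_blocks\.(\d+)"),
}

def module_block(name: str) -> Optional[str]:
    """Return the block label ('IN05', 'MID', 'OUT03', ...) for a module name."""
    if "middle_block" in name:
        return "MID"
    for prefix, pattern in BLOCK_PATTERNS.items():
        match = pattern.search(name)
        if match:
            return f"{prefix}{int(match.group(1)):02d}"
    return None  # not an IN/MID/OUT block (e.g. time embedding)

# The allow-list from the request, matching the "MIDD" preset.
ALLOWED = {"IN05", "IN07", "IN08", "MID",
           "OUT03", "OUT04", "OUT05", "OUT06", "OUT07"}

def should_train(name: str, allowed=ALLOWED) -> bool:
    """True if a LoRA module should be created for this UNet module."""
    block = module_block(name)
    return block is not None and block in allowed
```

When building the LoRA network, the trainer could then skip any entry from `unet.named_modules()` for which `should_train(name)` is false, so the excluded input/output blocks are never given trainable weights at all.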