p1atdev / LECO

Low-rank adaptation for Erasing COncepts from diffusion models.
https://arxiv.org/abs/2303.07345
Apache License 2.0
307 stars 23 forks

SDXL training doesn't work. #34

Open DarkAlchy opened 9 months ago

DarkAlchy commented 9 months ago

I tried it, and while some people are getting a different error than I am, the train_lora_XL.py script is broken. My error is that the optimizer's parameter list is empty.

DarkAlchy commented 9 months ago

`ValueError("optimizer got an empty parameter list")`

Anyone attempting to use the SDXL script gets that error; the regular LoRA script works.
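For reference, that ValueError comes straight from PyTorch's `Optimizer` base class, which refuses to be constructed with no parameters. A minimal reproduction, assuming torch is installed:

```python
# Minimal reproduction: any torch optimizer raises this exact
# ValueError when constructed with an empty parameter list.
import torch

try:
    torch.optim.AdamW([], lr=1e-4)
except ValueError as err:
    print(err)  # optimizer got an empty parameter list
```

So the error here is a symptom: something upstream handed the optimizer an empty list, not a problem with the optimizer arguments themselves.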

ssube commented 8 months ago

I'm running into this while training with SD v1.5 and v2.1 as well, using the latest code from main.

Looking into the error a little bit more, it looks like the list of parameters at https://github.com/p1atdev/LECO/blob/main/train_lora.py#L89 is empty. network.prepare_optimizer_params() is returning an empty list because the list of module types in https://github.com/p1atdev/LECO/blob/main/lora.py#L190 filters everything out.

I'm not sure why those modules are all being filtered, but that leaves an empty LoRA, which the optimizer doesn't like very much.
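A hypothetical sketch of that failure mode (not LECO's actual code, and the class names are illustrative): if LoRA target modules are selected by class name and the model's class names don't match the hard-coded list, everything is filtered out silently, and the error only surfaces later when the optimizer is built.

```python
# Hypothetical sketch of the suspected failure mode -- not LECO's code.
# LoRA targets are picked by class name; if the model's module classes
# don't match the hard-coded allow-list, everything is filtered out.

class BasicTransformerBlock:
    """Stand-in for a model module whose class name isn't in the target list."""

def collect_target_modules(modules, target_class_names):
    # Keep only modules whose class name is in the allow-list.
    return [m for m in modules if m.__class__.__name__ in target_class_names]

model_modules = [BasicTransformerBlock(), BasicTransformerBlock()]

# Nothing matches, so the LoRA network ends up with zero trainable
# parameters; no error is raised here -- it only appears later, when
# the optimizer is constructed from the empty parameter list.
targets = collect_target_modules(model_modules, ["Transformer2DModel"])
print(len(targets))  # 0
```

That would explain why the failure shows up as an optimizer error rather than at the point where the modules are filtered.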