p1atdev / LECO

Low-rank adaptation for Erasing COncepts from diffusion models.
https://arxiv.org/abs/2303.07345
Apache License 2.0

`training_method` seems not to work #30

Open · Con6924 opened this issue 1 year ago

Con6924 commented 1 year ago

Thanks for your impressive work and clear code!

I find that setting `training_method` to `"selfattn"` or `"xattn"` leads to a failure:

```
create LoRA for U-Net: 0 modules.
Traceback (most recent call last):
  File "/home/notebook/code/personal/S9049723/LECO/./train_lora.py", line 343, in <module>
    main(args)
  File "/home/notebook/code/personal/S9049723/LECO/./train_lora.py", line 330, in main
    train(config, prompts)
  File "/home/notebook/code/personal/S9049723/LECO/./train_lora.py", line 89, in train
    optimizer = optimizer_module(network.prepare_optimizer_params(), lr=config.train.lr, **optimizer_kwargs)
  File "/home/notebook/code/personal/S9049723/Anaconda3/envs/leco/lib/python3.10/site-packages/torch/optim/adamw.py", line 50, in __init__
    super().__init__(params, defaults)
  File "/home/notebook/code/personal/S9049723/Anaconda3/envs/leco/lib/python3.10/site-packages/torch/optim/optimizer.py", line 187, in __init__
    raise ValueError("optimizer got an empty parameter list")
ValueError: optimizer got an empty parameter list
```
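Incidentally, the empty-parameter error is a bit opaque; a guard right before the optimizer is constructed (line 89 in the traceback) would fail earlier with a clearer message. This is only a sketch against the names visible in the traceback, and I'm assuming `prepare_optimizer_params()` returns a list and that `training_method` is reachable as `config.network.training_method`:

```python
# Sketch of an early check in train() (my suggestion, not existing code).
params = network.prepare_optimizer_params()
if len(params) == 0:
    # Fail before torch.optim raises "optimizer got an empty parameter list".
    raise ValueError(
        f"training_method={config.network.training_method!r} matched no modules, "
        "so there is nothing to optimize"
    )
optimizer = optimizer_module(params, lr=config.train.lr, **optimizer_kwargs)
```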

According to the LoRA implementation, LoRA modules are only attached to `Conv` and `Linear` modules, so the attention blocks selected by these options end up with no LoRA attached (hence the `0 modules` message above). Maybe the related code can be removed or refined in a later update.
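To illustrate my reading of the failure with a toy example (the module layout and selection filter below are my own approximation, not the actual code in this repository): if the name filter for `"selfattn"`/`"xattn"` picks out the attention block itself, but a LoRA module is only created when the selected module is itself `Linear`/`Conv2d`, the result is zero targets. Descending into the selected block's `Linear` projections would find them instead.

```python
# Toy approximation of the module-selection logic (not the repository's code).
import torch.nn as nn


class ToyAttention(nn.Module):
    """Stand-in for a diffusers attention block: only its projections are Linear."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.to_out = nn.Linear(dim, dim)


def find_targets_current(root: nn.Module, keyword: str) -> list[str]:
    """My reading of the current behaviour: the name filter selects the attention
    block, but LoRA is only created when that module is itself Linear/Conv2d."""
    return [
        name
        for name, module in root.named_modules()
        if keyword in name.split(".")[-1] and isinstance(module, (nn.Linear, nn.Conv2d))
    ]


def find_targets_refined(root: nn.Module, keyword: str) -> list[str]:
    """Possible refinement: descend into the selected block and wrap its
    Linear/Conv2d children (to_q / to_k / to_v / to_out) instead."""
    targets = []
    for name, module in root.named_modules():
        if keyword not in name.split(".")[-1]:
            continue
        for child_name, child in module.named_modules():
            if isinstance(child, (nn.Linear, nn.Conv2d)):
                targets.append(f"{name}.{child_name}")
    return targets


unet_like = nn.ModuleDict({"attn1": ToyAttention(), "attn2": ToyAttention()})
print(find_targets_current(unet_like, "attn1"))  # [] -> "0 modules", empty optimizer
print(find_targets_refined(unet_like, "attn1"))  # ['attn1.to_q', 'attn1.to_k', ...]
```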