cloneofsimo / lora

Using Low-rank adaptation to quickly fine-tune diffusion models.
https://arxiv.org/abs/2106.09685
Apache License 2.0

Alpha is ignored when merging LoRAs #187

Closed: Kidel closed this issue 1 year ago

Kidel commented 1 year ago

How to reproduce:

  1. merge the same LoRA with itself using different alpha_1 and alpha_2 values, writing each result to a different output file
  2. test the different output LoRAs and notice they all produce the same images

I tried with both

lora_add "D:\GitHub\stable-diffusion-webui\models\Lora\jim_lee.safetensors" "D:\GitHub\stable-diffusion-webui\models\Lora\jim_lee.safetensors" "D:\GitHub\stable-diffusion-webui\models\Lora\jim_lee_offset6.safetensors" --alpha_1=0.6 --alpha_2=0

and

lora_add "D:\GitHub\stable-diffusion-webui\models\Lora\jim_lee.safetensors" "D:\GitHub\stable-diffusion-webui\models\Lora\jim_lee.safetensors" "D:\GitHub\stable-diffusion-webui\models\Lora\jim_lee_offset7.safetensors" 0.6 0

using different alpha values for each output file. Every output file produces exactly the same images, even though they were generated with different alpha values.
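For reference, the merge that `--alpha_1`/`--alpha_2` are meant to perform is a per-tensor linear combination of the two LoRA state dicts. Below is a minimal sketch of that operation, assuming flat safetensors state dicts; the helper name `merge_loras` is hypothetical and this is not the actual `lora_add` implementation:

```python
# Minimal sketch of a weighted LoRA merge: out = alpha_1 * A + alpha_2 * B
# for every tensor key. The flat state-dict layout and the helper name are
# illustrative assumptions, not the lora_add implementation.
from safetensors.torch import load_file, save_file

def merge_loras(path_a, path_b, out_path, alpha_1=0.5, alpha_2=0.5):
    a = load_file(path_a)
    b = load_file(path_b)
    merged = {}
    for key in set(a) | set(b):
        ta, tb = a.get(key), b.get(key)
        if ta is not None and tb is not None:
            merged[key] = alpha_1 * ta + alpha_2 * tb
        elif ta is not None:
            merged[key] = alpha_1 * ta  # key only present in the first file
        else:
            merged[key] = alpha_2 * tb  # key only present in the second file
    # Caveat: if the dict stores lora_up/lora_down pairs, scaling both halves
    # scales the effective delta (up @ down) quadratically in alpha; scaling
    # only one half of each pair keeps the combination linear.
    save_file(merged, out_path)
```

With the same file passed twice and `--alpha_1=0.6 --alpha_2=0`, every tensor should end up scaled by 0.6, so the merged LoRA ought to produce visibly weaker edits than the original. Identical outputs suggest the alpha values never reach the merge at all.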

Kidel commented 1 year ago

This still doesn't work, but I made my own script anyway.

cloneofsimo commented 1 year ago

I think the LoRA you are using was trained with kohya's scripts, not mine. Is that correct?

Kidel commented 1 year ago

Yes
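A quick way to confirm which trainer produced a file is to list its tensor keys. The sketch below does that; the `.alpha` key pattern is an assumption based on common kohya-style community files, not a documented spec. If the keys don't match what this repo's `lora_add` expects, the alphas could be applied to nothing, which would explain identical outputs.

```python
# Quick format check: list the tensor keys stored in a LoRA .safetensors
# file. The ".alpha" suffix heuristic is an assumption based on common
# kohya-style files, not a documented spec.
from safetensors import safe_open

def describe_lora(path):
    with safe_open(path, framework="pt", device="cpu") as f:
        keys = list(f.keys())
    print(f"{len(keys)} tensors in {path}")
    for key in sorted(keys)[:10]:  # show a sample of the key naming
        print(" ", key)
    if any(key.endswith(".alpha") for key in keys):
        print("'.alpha' entries present -> likely a kohya-style LoRA")
```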