hako-mikan / sd-webui-supermerger

Model merge extension for Stable Diffusion web UI
GNU Affero General Public License v3.0

Weight sum using torch.lerp() #278

Closed · wkpark closed this issue 1 year ago

wkpark commented 1 year ago


Added a Weight sum (lerp) mode. It produces the same result as the normal Weight sum but is 2x-3x faster.
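
For reference, a minimal sketch of the idea (not the extension's actual code; the dict-based merge loop and tensor names here are simplifications): torch.lerp(a, b, alpha) computes a + alpha * (b - a), which is algebraically the same weighted sum as (1 - alpha) * a + alpha * b but runs as a single fused kernel per tensor.

    import torch

    def weight_sum_naive(theta_a, theta_b, alpha):
        # Plain weighted sum: two scalar multiplies and an add per tensor.
        return {k: (1 - alpha) * theta_a[k] + alpha * theta_b[k] for k in theta_a}

    def weight_sum_lerp(theta_a, theta_b, alpha):
        # Same result computed with torch.lerp, which fuses the operation.
        return {k: torch.lerp(theta_a[k], theta_b[k], alpha) for k in theta_a}

    # Quick check that both paths agree (up to floating-point rounding).
    a = {"w": torch.randn(4, 4)}
    b = {"w": torch.randn(4, 4)}
    assert torch.allclose(weight_sum_naive(a, b, 0.5)["w"],
                          weight_sum_lerp(a, b, 0.5)["w"])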


  model A       : Basil_mix_fixed
  model B       : Anything_v3Fixed-prunedFp16
  model C       : Basil_mix_fixed
  alpha,beta    : (0.5, 0.25)
  weights_alpha :
  weights_beta  :
  mode          : Weight sum(lerp)
  MBW           : False
  CalcMode      : normal
  Elemental     :
  Weights Seed  : 4216254361.0
  Adjust        :
Loading weights [Anything_v3Fixed-prunedFp16] from file
Loading weights [Basil_mix_fixed] from file
Stage 1/2: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1131/1131 [00:06<00:00, 164.94it/s]
Stage 2/2: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1131/1131 [00:00<?, ?it/s]
Saving...
Done!
Creating model from config: F:\webui\webui\stable-diffusion-webui\configs\v1-inference.yaml
Loading VAE weights specified in settings: F:\webui\webui\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: sdp... done.
Model loaded in 2.9s (create model: 0.6s, apply weights to model: 1.3s, load VAE: 0.8s)

For comparison, the original Weight sum:

  model A       : Basil_mix_fixed
  model B       : Anything_v3Fixed-prunedFp16
  model C       : Basil_mix_fixed
  alpha,beta    : (0.5, 0.25)
  weights_alpha :
  weights_beta  :
  mode          : Weight sum
  MBW           : False
  CalcMode      : normal
  Elemental     :
  Weights Seed  : 4216254361.0
  Adjust        :
Loading weights [Anything_v3Fixed-prunedFp16] from cache
Loading weights [Basil_mix_fixed] from cache
Stage 1/2: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1131/1131 [00:20<00:00, 54.22it/s]
Stage 2/2: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1131/1131 [00:00<00:00, 1131351.73it/s]
Saving...
Done!
Creating model from config: F:\webui\webui\stable-diffusion-webui\configs\v1-inference.yaml
Loading VAE weights specified in settings: F:\webui\webui\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: sdp... done.
Model loaded in 3.7s (create model: 0.6s, apply weights to model: 1.2s, load VAE: 0.7s, load textual inversion embeddings: 0.9s, calculate empty prompt: 0.3s).
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
hako-mikan commented 1 year ago

Great! I also confirmed that the merge results do not change and that the merge speed improves. So rather than adding it as a separate method, I applied it to the regular Weight sum, and to the cosine and sum Twice modes as well. Thank you!
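
A hedged sketch of how the same trick could extend to a two-stage merge, assuming "sum Twice" means merging A with B by alpha and then merging that result with C by beta (the helper name below is hypothetical, not the extension's code):

    import torch

    def sum_twice_lerp(theta_a, theta_b, theta_c, alpha, beta):
        # Hypothetical sketch: lerp A toward B by alpha, then lerp the result
        # toward C by beta - two fused kernels per tensor instead of several
        # separate elementwise multiplies and adds.
        return {
            k: torch.lerp(torch.lerp(theta_a[k], theta_b[k], alpha),
                          theta_c[k], beta)
            for k in theta_a
        }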