hako-mikan / sd-webui-supermerger

Model merge extension for Stable Diffusion web UI
GNU Affero General Public License v3.0
753 stars 111 forks

can't merge Loras to checkpoints since December updates #324

Closed: e-wolf29 closed this issue 9 months ago

e-wolf29 commented 10 months ago

Hi šŸ™‚ Ever since the December updates of SD.Next (currently running the 2023-12-30 update) and SuperMerger, I have been having trouble merging LoRAs into my checkpoints with SuperMerger (on the current and even previous commits). This is a problem I never encountered before the December updates, and every time the merge still stops at 27% on the first LoRA.

SD.Next is currently running the 2023-12-30 update and SuperMerger is at the latest version.

Plus LoRA start
Loading anime-toon/toonyou_beta6
Loading model: /home/kubuntu/automatic/models/Stable-diffusion/anime-toon/toonyou_beta6.safetensors ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā” 0.0/2.3 GB -:--:--
17:34:33-793368 INFO Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=no_grad fp16=True bf16=False
17:34:38-840136 INFO LDM: LatentDiffusion: mode=eps
17:34:38-841261 INFO LDM: DiffusionWrapper params=859.52M
17:34:38-842008 INFO Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline file="/home/kubuntu/automatic/models/Stable-diffusion/anime-toon/toonyou_beta6.safetensors" size=2193MB
Calculating hash: /home/kubuntu/automatic/models/Stable-diffusion/anime-toon/toonyou_beta6.safetensors ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā” 2.3/2.3 GB 0:00:00
Loading model: models/VAE/vae-ft-mse-840000-ema-pruned.safetensors ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā” 0.0/334.6 MB -:--:--
17:34:43-815356 INFO Cross-attention: optimization=Scaled-Dot-Product options=['SDP disable memory attention']
17:34:43-880808 INFO Model loaded in 10.12 { create=5.04 apply=2.50 vae=2.02 move=0.45 embeddings=0.06 }
17:34:44-312598 INFO Model load finished: {'ram': {'used': 9.07, 'total': 31.26}, 'gpu': {'used': 2.24, 'total': 15.98}, 'retries': 0, 'oom': 0} cached=0
Loading model: models/Lora/enhancers ā„ sliders/add_detail.safetensors ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā” 0.0/37.9 MB -:--:--
Calculating hash: models/Lora/enhancers ā„ sliders/add_detail.safetensors ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā” 37.9/37.9 MB 0:00:00
Loading model: models/Lora/enhancers ā„ sliders/more_details.safetensors ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā” 0.0/9.5 MB -:--:--
Calculating hash: models/Lora/enhancers ā„ sliders/more_details.safetensors ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā”ā” 9.5/9.5 MB 0:00:00
add_detail: Successfully set the ratio [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
more_details: Successfully set the ratio [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
add_detail:  27%|ā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–Ž | 72/264 [00:00<00:00, 195.66it/s]
Traceback (most recent call last):
  File "/home/kubuntu/automatic/venv/lib/python3.10/site-packages/gradio/queueing.py", line 388, in call_prediction
    output = await route_utils.call_process_api(
  File "/home/kubuntu/automatic/venv/lib/python3.10/site-packages/gradio/route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/kubuntu/automatic/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1437, in process_api
    result = await self.call_function(
  File "/home/kubuntu/automatic/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1109, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/kubuntu/automatic/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/kubuntu/automatic/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "/home/kubuntu/automatic/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/home/kubuntu/automatic/venv/lib/python3.10/site-packages/gradio/utils.py", line 641, in wrapper
    response = f(*args, **kwargs)
  File "/home/kubuntu/automatic/extensions/sd-webui-supermerger/scripts/mergers/pluslora.py", line 729, in pluslora
    theta_0 = newpluslora(theta_0,filenames,lweis,names, isxl,isv2, keychanger)
  File "/home/kubuntu/automatic/extensions/sd-webui-supermerger/scripts/mergers/pluslora.py", line 827, in newpluslora
    theta_0[wkey], theta_0[bkey]= plusweights(theta_0[wkey], module, bias = theta_0[bkey])
  File "/home/kubuntu/automatic/extensions/sd-webui-supermerger/scripts/mergers/pluslora.py", line 848, in plusweights
    updown = module.calc_updown(weight.to(dtype=torch.float))
  File "/home/kubuntu/automatic/extensions-builtin/Lora/network_lora.py", line 67, in calc_updown
    return self.finalize_updown(updown, target, output_shape)
  File "/home/kubuntu/automatic/extensions-builtin/Lora/network.py", line 142, in finalize_updown
    return updown * self.calc_scale() * self.multiplier(), ex_bias
  File "/home/kubuntu/automatic/extensions-builtin/Lora/network.py", line 122, in multiplier
    return self.network.unet_multiplier[0]
TypeError: 'float' object is not subscriptable


Your help on this would be much appreciated, as I merge checkpoints and LoRAs on a daily basis.

Muxropendiy commented 10 months ago

I can confirm: I have the same problem, with an identical error message. Unfortunately, the original Automatic1111 does not work on my AMD card, so it would be great if this extension worked in SD.Next again.

This change probably caused the problem: https://github.com/vladmandic/automatic/commit/323e2c142c15f3bd36e75eb5d688a42e7aa026cb

nCoderGit commented 9 months ago

Yup, I think Muxropendiy is right. As far as I can see, though, A1111 and SD.Next handle the else case in multiplier() of their respective extensions-builtin\Lora\network.py scripts differently.


Automatic1111

    def multiplier(self):
        if 'transformer' in self.sd_key[:20]:
            return self.network.te_multiplier
        else:
            return self.network.unet_multiplier

(Does it work for A1111 users? Could someone test whether merging a LoRA into a checkpoint works there?)


SD.Next

    def multiplier(self):
        if 'transformer' in self.sd_key[:20]:
            return self.network.te_multiplier
        if "down_blocks" in self.sd_key:
            return self.network.unet_multiplier[0]
        if "mid_block" in self.sd_key:
            return self.network.unet_multiplier[1]
        if "up_blocks" in self.sd_key:
            return self.network.unet_multiplier[2]
        else:
            return self.network.unet_multiplier[0]

(When the merge reaches the UNet block weights at around 27% (see below), the else case triggers and tries to index unet_multiplier (a float) as if it were an array, which obviously won't work, because you can't take the nth element of a float.)
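To illustrate the failure mode, here is a minimal, self-contained sketch. The defensive guard at the end is purely hypothetical and is not the fix that later landed in SD.Next; it just mirrors the [down, mid, up] indexing in the code quoted above.

    # Minimal repro of the failure: when SuperMerger drives the merge,
    # network.unet_multiplier ends up as a plain float, but SD.Next's
    # multiplier() indexes it per block (unet_multiplier[0] / [1] / [2]).
    unet_multiplier = 1.0
    try:
        _ = unet_multiplier[0]  # what the else branch in network.py effectively does
    except TypeError as err:
        print(err)  # 'float' object is not subscriptable

    # Hypothetical guard (illustration only, not the actual SD.Next fix):
    # expand a scalar multiplier into the per-block [down, mid, up] list.
    if isinstance(unet_multiplier, (int, float)):
        unet_multiplier = [float(unet_multiplier)] * 3
    print(unet_multiplier[0])  # 1.0 -- indexing now works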


No idea if this is helpful, but I tried to track down what's going on and added a few debug statements:
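(A rough, hypothetical reconstruction of those debug statements, inferred from their output, is sketched below; the real ones were pasted into SuperMerger's scripts/mergers/pluslora.py and SD.Next's extensions-builtin/Lora/network.py.)

    # Hypothetical reconstruction of the debug prints; variable names follow
    # the traceback (bkey/wkey are theta_0 keys, module/weight are the
    # arguments that plusweights() receives).
    def debug_key_pair(bkey: str, wkey: str) -> None:
        # emitted from newpluslora()'s loop over the checkpoint keys
        print(f'in "if bkey in theta_0:keys():"\n bkey=<{bkey}>,\n wkey=<{wkey}>')

    def debug_plusweights(module: object, weight: object) -> None:
        # emitted at the top of plusweights(), just before calc_updown()
        print(f'in plusweights(): module=<<{module}>>,\n weight = <{weight}>')

    # plus a bare print("in multiplier(): ELSE") in network.py's final else branch

Their output during the failing merge looked like this: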

-----
[ ... ]
in "if bkey in theta_0:keys():"
 bkey=<cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.q_proj.bias>,
 wkey=<cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.q_proj.weight>
in plusweights(): module=<<network_lora.NetworkModuleLora object at 0x000001AF4846B2B0>>,
 weight = <tensor([[ 0.0219,  0.0303,  0.0232,  ...,  0.0069, -0.0138,  0.0303],
        [-0.0037, -0.0036,  0.0076,  ...,  0.0149,  0.0174, -0.0006],
        [ 0.0253,  0.0140,  0.0116,  ..., -0.0027,  0.0090, -0.0150],
        ...,
        [-0.0066, -0.0137, -0.0082,  ..., -0.0078,  0.0200,  0.0083],
        [ 0.0005, -0.0259,  0.0145,  ...,  0.0027, -0.0160,  0.0306],
        [ 0.0138, -0.0073,  0.0011,  ..., -0.0140,  0.0077,  0.0189]],
       dtype=torch.float16)>

-----
in "if bkey in theta_0:keys():"
 bkey=<cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.v_proj.bias>,
 wkey=<cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.v_proj.weight>
in plusweights(): module=<<network_lora.NetworkModuleLora object at 0x000001AF4846AF50>>,
 weight = <tensor([[-0.0159, -0.0051,  0.0051,  ..., -0.0172, -0.0129,  0.0081],
        [ 0.0171,  0.0210, -0.0006,  ..., -0.0097, -0.0084, -0.0060],
        [ 0.0046,  0.0098,  0.0007,  ...,  0.0308, -0.0147, -0.0191],
        ...,
        [-0.0186, -0.0007,  0.0145,  ..., -0.0108,  0.0038, -0.0074],
        [ 0.0075, -0.0092,  0.0103,  ...,  0.0183,  0.0177, -0.0251],
        [-0.0071,  0.0016, -0.0272,  ..., -0.0281, -0.0008,  0.0159]],
       dtype=torch.float16)>

-----
in "if bkey in theta_0:keys():"
 bkey=<model.diffusion_model.input_blocks.1.1.proj_in.bias>,
 wkey=<model.diffusion_model.input_blocks.1.1.proj_in.weight>
in plusweights(): module=<<network_lora.NetworkModuleLora object at 0x000001AF48468520>>,
 weight = <tensor([[[[ 0.0170]],

         [[ 0.0283]],

         [[-0.0441]],

         ...,

         [[ 0.1027]],

         [[ 0.0273]],

         [[-0.0338]]],

        [[[ 0.0337]],

         [[-0.0691]],

         [[-0.0068]],

         ...,

         [[-0.0171]],

         [[-0.0748]],

         [[-0.0323]]],

        [[[-0.0365]],

         [[ 0.0551]],

         [[ 0.0151]],

         ...,

         [[-0.0613]],

         [[-0.0473]],

         [[-0.0157]]],

        ...,

        [[[-0.0137]],

         [[-0.0490]],

         [[-0.1021]],

         ...,

         [[ 0.0093]],

         [[-0.0475]],

         [[-0.0419]]],

        [[[-0.0077]],

         [[ 0.0140]],

         [[ 0.0246]],

         ...,

         [[ 0.0256]],

         [[-0.0869]],

         [[-0.0174]]],

        [[[ 0.1021]],

         [[ 0.0216]],

         [[ 0.0489]],

         ...,

         [[ 0.0948]],

         [[-0.0406]],

         [[-0.0067]]]], dtype=torch.float16)>
in multiplier(): ELSE
navi_KawaiiTech-20:  27%|ā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆā–ˆ                                                           | 72/264 [00:00<00:02, 83.71it/s]
Traceback (most recent call last):
  File "C:\StableDiffusion\SDNext\venv\lib\site-packages\gradio\queueing.py", line 388, in call_prediction
    output = await route_utils.call_process_api(
  File "C:\StableDiffusion\SDNext\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\StableDiffusion\SDNext\venv\lib\site-packages\gradio\blocks.py", line 1437, in process_api
    result = await self.call_function(
  File "C:\StableDiffusion\SDNext\venv\lib\site-packages\gradio\blocks.py", line 1109, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\StableDiffusion\SDNext\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\StableDiffusion\SDNext\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\StableDiffusion\SDNext\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\StableDiffusion\SDNext\venv\lib\site-packages\gradio\utils.py", line 641, in wrapper
    response = f(*args, **kwargs)
  File "C:\StableDiffusion\SDNext\extensions\sd-webui-supermerger\scripts\mergers\pluslora.py", line 729, in pluslora
    theta_0 = newpluslora(theta_0,filenames,lweis,names, isxl,isv2, keychanger)
  File "C:\StableDiffusion\SDNext\extensions\sd-webui-supermerger\scripts\mergers\pluslora.py", line 828, in newpluslora
    theta_0[wkey], theta_0[bkey]= plusweights(theta_0[wkey], module, bias = theta_0[bkey])
  File "C:\StableDiffusion\SDNext\extensions\sd-webui-supermerger\scripts\mergers\pluslora.py", line 850, in plusweights
    updown = module.calc_updown(weight.to(dtype=torch.float))
  File "C:\StableDiffusion\SDNext\extensions-builtin\Lora\network_lora.py", line 67, in calc_updown
    return self.finalize_updown(updown, target, output_shape)
  File "C:\StableDiffusion\SDNext\extensions-builtin\Lora\network.py", line 143, in finalize_updown
    return updown * self.calc_scale() * self.multiplier(), ex_bias
  File "C:\StableDiffusion\SDNext\extensions-builtin\Lora\network.py", line 123, in multiplier
    return self.network.unet_multiplier[0]
TypeError: 'float' object is not subscriptable

Note that once bkey / wkey reach model.diffusion_model.input_blocks.1.1.proj_in.bias / .weight, the weight format changes (a 4D conv weight instead of the 2D attention weights above), and that is the first time the in multiplier(): ELSE case fires after starting the merge.
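To make the dispatch concrete, here is a small illustration that mirrors the branch order of the SD.Next multiplier() quoted above. The two sd_key strings are assumed examples for the original (LDM) backend, derived from the keys in the debug output; with LDM naming, UNet modules contain none of down_blocks / mid_block / up_blocks and therefore always fall through to the final else, which indexes unet_multiplier.

    # Illustration only: mirrors the branch structure of SD.Next's multiplier()
    # quoted above, to show which branch each (assumed) module sd_key would take.
    def branch_for(sd_key: str) -> str:
        if 'transformer' in sd_key[:20]:
            return "te_multiplier"
        if "down_blocks" in sd_key:
            return "unet_multiplier[0]"
        if "mid_block" in sd_key:
            return "unet_multiplier[1]"
        if "up_blocks" in sd_key:
            return "unet_multiplier[2]"
        return "unet_multiplier[0]  <- the ELSE case"

    for key in (
        "transformer.text_model.encoder.layers.9.self_attn.q_proj",  # text encoder
        "diffusion_model.input_blocks.1.1.proj_in",                   # UNet, LDM naming
    ):
        print(key, "->", branch_for(key))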

e-wolf29 commented 9 months ago

It does work if you run SuperMerger on the A1111 web UI, afaik. EDIT: the latest update of SD.Next (master branch) totally fixed the problem. What about you, @Muxropendiy?

nCoderGit commented 9 months ago

Can confirm! It works perfectly now on both the original and diffusers backends :)

Muxropendiy commented 9 months ago

It works!