I can confirm: I have the same problem, with an identical error message. Unfortunately, the original Automatic1111 does not work on my AMD card, so it would be great if this extension worked in SD.Next again.
This change probably caused the problem: https://github.com/vladmandic/automatic/commit/323e2c142c15f3bd36e75eb5d688a42e7aa026cb
Yup, I think Muxropendiy is right.
As far as I can see, though, A1111 and SD.Next handle the `else` case in `multiplier()` of their respective `extensions-builtin\Lora\network.py` scripts differently.
Automatic1111

```python
def multiplier(self):
    if 'transformer' in self.sd_key[:20]:
        return self.network.te_multiplier
    else:
        return self.network.unet_multiplier
```
(Does it work for A1111 users? Could someone test whether merging a LoRA into a checkpoint works there?)
SD.Next

```python
def multiplier(self):
    if 'transformer' in self.sd_key[:20]:
        return self.network.te_multiplier
    if "down_blocks" in self.sd_key:
        return self.network.unet_multiplier[0]
    if "mid_block" in self.sd_key:
        return self.network.unet_multiplier[1]
    if "up_blocks" in self.sd_key:
        return self.network.unet_multiplier[2]
    else:
        return self.network.unet_multiplier[0]
```
(When block weights are merged and the merge reaches about 27% (see below), the `else` case fires and tries to index `unet_multiplier`, which is a plain float here, as if it were an array. That obviously won't work, because you can't subscript a float.)
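The failure is trivial to reproduce outside the webui. A minimal sketch in plain Python (the variable name just mirrors the SD.Next attribute involved; everything else is made up):

```python
# Minimal repro: SuperMerger apparently hands over unet_multiplier as a
# plain float, while SD.Next's multiplier() indexes it like a per-block list.
unet_multiplier = 1.0      # scalar, as set on SuperMerger's code path
try:
    unet_multiplier[0]     # what SD.Next does for UNet keys
except TypeError as e:
    print(e)               # -> 'float' object is not subscriptable
```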
No idea if this is helpful, but I tried to track down what's going on and added a few debug statements:
```
-----
[ ... ]
in "if bkey in theta_0:keys():"
bkey=<cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.q_proj.bias>,
wkey=<cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.q_proj.weight>
in plusweights(): module=<<network_lora.NetworkModuleLora object at 0x000001AF4846B2B0>>,
weight = <tensor([[ 0.0219, 0.0303, 0.0232, ..., 0.0069, -0.0138, 0.0303],
[-0.0037, -0.0036, 0.0076, ..., 0.0149, 0.0174, -0.0006],
[ 0.0253, 0.0140, 0.0116, ..., -0.0027, 0.0090, -0.0150],
...,
[-0.0066, -0.0137, -0.0082, ..., -0.0078, 0.0200, 0.0083],
[ 0.0005, -0.0259, 0.0145, ..., 0.0027, -0.0160, 0.0306],
[ 0.0138, -0.0073, 0.0011, ..., -0.0140, 0.0077, 0.0189]],
dtype=torch.float16)>
-----
in "if bkey in theta_0:keys():"
bkey=<cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.v_proj.bias>,
wkey=<cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.v_proj.weight>
in plusweights(): module=<<network_lora.NetworkModuleLora object at 0x000001AF4846AF50>>,
weight = <tensor([[-0.0159, -0.0051, 0.0051, ..., -0.0172, -0.0129, 0.0081],
[ 0.0171, 0.0210, -0.0006, ..., -0.0097, -0.0084, -0.0060],
[ 0.0046, 0.0098, 0.0007, ..., 0.0308, -0.0147, -0.0191],
...,
[-0.0186, -0.0007, 0.0145, ..., -0.0108, 0.0038, -0.0074],
[ 0.0075, -0.0092, 0.0103, ..., 0.0183, 0.0177, -0.0251],
[-0.0071, 0.0016, -0.0272, ..., -0.0281, -0.0008, 0.0159]],
dtype=torch.float16)>
-----
in "if bkey in theta_0:keys():"
bkey=<model.diffusion_model.input_blocks.1.1.proj_in.bias>,
wkey=<model.diffusion_model.input_blocks.1.1.proj_in.weight>
in plusweights(): module=<<network_lora.NetworkModuleLora object at 0x000001AF48468520>>,
weight = <tensor([[[[ 0.0170]],
[[ 0.0283]],
[[-0.0441]],
...,
[[ 0.1027]],
[[ 0.0273]],
[[-0.0338]]],
[[[ 0.0337]],
[[-0.0691]],
[[-0.0068]],
...,
[[-0.0171]],
[[-0.0748]],
[[-0.0323]]],
[[[-0.0365]],
[[ 0.0551]],
[[ 0.0151]],
...,
[[-0.0613]],
[[-0.0473]],
[[-0.0157]]],
...,
[[[-0.0137]],
[[-0.0490]],
[[-0.1021]],
...,
[[ 0.0093]],
[[-0.0475]],
[[-0.0419]]],
[[[-0.0077]],
[[ 0.0140]],
[[ 0.0246]],
...,
[[ 0.0256]],
[[-0.0869]],
[[-0.0174]]],
[[[ 0.1021]],
[[ 0.0216]],
[[ 0.0489]],
...,
[[ 0.0948]],
[[-0.0406]],
[[-0.0067]]]], dtype=torch.float16)>
in multiplier(): ELSE
navi_KawaiiTech-20:  27%|██████████████████████                | 72/264 [00:00<00:02, 83.71it/s]
Traceback (most recent call last):
File "C:\StableDiffusion\SDNext\venv\lib\site-packages\gradio\queueing.py", line 388, in call_prediction
output = await route_utils.call_process_api(
File "C:\StableDiffusion\SDNext\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
output = await app.get_blocks().process_api(
File "C:\StableDiffusion\SDNext\venv\lib\site-packages\gradio\blocks.py", line 1437, in process_api
result = await self.call_function(
File "C:\StableDiffusion\SDNext\venv\lib\site-packages\gradio\blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\StableDiffusion\SDNext\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\StableDiffusion\SDNext\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\StableDiffusion\SDNext\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\StableDiffusion\SDNext\venv\lib\site-packages\gradio\utils.py", line 641, in wrapper
response = f(*args, **kwargs)
File "C:\StableDiffusion\SDNext\extensions\sd-webui-supermerger\scripts\mergers\pluslora.py", line 729, in pluslora
theta_0 = newpluslora(theta_0,filenames,lweis,names, isxl,isv2, keychanger)
File "C:\StableDiffusion\SDNext\extensions\sd-webui-supermerger\scripts\mergers\pluslora.py", line 828, in newpluslora
theta_0[wkey], theta_0[bkey]= plusweights(theta_0[wkey], module, bias = theta_0[bkey])
File "C:\StableDiffusion\SDNext\extensions\sd-webui-supermerger\scripts\mergers\pluslora.py", line 850, in plusweights
updown = module.calc_updown(weight.to(dtype=torch.float))
File "C:\StableDiffusion\SDNext\extensions-builtin\Lora\network_lora.py", line 67, in calc_updown
return self.finalize_updown(updown, target, output_shape)
File "C:\StableDiffusion\SDNext\extensions-builtin\Lora\network.py", line 143, in finalize_updown
return updown * self.calc_scale() * self.multiplier(), ex_bias
File "C:\StableDiffusion\SDNext\extensions-builtin\Lora\network.py", line 123, in multiplier
return self.network.unet_multiplier[0]
TypeError: 'float' object is not subscriptable
```
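For what it's worth, a small guard in `multiplier()` would paper over the mismatch by accepting both a plain float and the per-block `[down, mid, up]` list. This is only a sketch of the idea, not the actual fix that later landed in SD.Next:

```python
# Illustrative only -- not the actual SD.Next patch. Tolerates both a
# scalar unet_multiplier (as SuperMerger sets it) and the per-block list
# that SD.Next's own code path provides.
def multiplier(self):
    if 'transformer' in self.sd_key[:20]:
        return self.network.te_multiplier
    m = self.network.unet_multiplier
    if isinstance(m, (int, float)):  # scalar: one weight for every block
        return m
    if "down_blocks" in self.sd_key:
        return m[0]
    if "mid_block" in self.sd_key:
        return m[1]
    if "up_blocks" in self.sd_key:
        return m[2]
    return m[0]
```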
Note that once `bkey` / `wkey` reach `model.diffusion_model.input_blocks.1.1.proj_in.bias` / `.weight`, the tensor format gets a bit weird (the first 4-D conv weight), and that is also the first time the `in multiplier(): ELSE` case fires since I pressed the button to merge the LoRA into the checkpoint.
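A quick check shows why exactly this key falls through (assuming `sd_key` matches the checkpoint key at this point, which I have not verified): it is an LDM-style UNet key, so neither the `transformer` prefix test nor any of the diffusers-style block substrings match:

```python
# Hypothetical re-run of the tests from SD.Next's multiplier() against
# the LDM-style key from the debug output above.
key = "model.diffusion_model.input_blocks.1.1.proj_in.weight"
print('transformer' in key[:20])                                         # False
print(any(s in key for s in ("down_blocks", "mid_block", "up_blocks")))  # False
# -> falls through to `return self.network.unet_multiplier[0]`, which
#    crashes when unet_multiplier is a float.
```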
It does work if you run SuperMerger on the A1111 Web UI, as far as I know. EDIT: the latest update of SD.Next (master branch) totally fixed the problem. What about you, @Muxropendiy?
Can confirm! Works perfectly now on both the original and the diffusers backend :)
It works!
Hi 👋 Ever since the December updates of SD.Next and SuperMerger, I have been experiencing issues merging LoRAs into my checkpoints with SuperMerger (on the current and even previous commits). This is a problem I never encountered before the December updates, and every time it still stops at 27% on the first LoRA.
SD.Next is currently running on the 2023-12-30 update and SuperMerger is on the latest version.
Your help on this would be much appreciated, as I merge checkpoints and LoRAs on a daily basis.