s1dlx / meh

Merging Execution Helper
MIT License

error - deal with inpainting #42

Open DirtyHamster opened 1 year ago

DirtyHamster commented 1 year ago

Run 1: with rebasin.

H:\Users\adamf\AI_Progs\sd-meh-merge\meh>merge_models.py -a H:\Users\adamf\AI_Progs\AI_Models\test\00-regit-nametolongmaster4_50prune.safetensors -b H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\dreamshaper_7-inpainting.safetensors -m weighted_sum -p 32 -o H:\Users\adamf\AI_Progs\AI_Models\test\inpainttest -f safetensors -ba 0.5 -bb 0.5 -pr -rb -rbi 10

Run 2: without rebasin.

merge_models.py -a H:\Users\adamf\AI_Progs\AI_Models\test\00-regit-nametolongmaster4_50prune.safetensors -b H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\dreamshaper_7-inpainting.safetensors -m weighted_sum -p 32 -o H:\Users\adamf\AI_Progs\AI_Models\test\inpainttest -f safetensors -ba 0.5 -bb 0.5 -pr

Errors:

Run 1:

before loading models: 0.000
loading: H:\Users\adamf\AI_Progs\AI_Models\test\00-regit-nametolongmaster4_50prune.safetensors
loading: H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\dreamshaper_7-inpainting.safetensors
models loaded: 0.000
permuting
0 iteration start: 0.000
weights & bases, before simple merge: 0.000
stage 1: 100%|█████████▉| 1130/1131 [01:15<00:00, 14.94it/s]
Traceback (most recent call last):
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\merge_models.py", line 151, in <module>
    main()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\merge_models.py", line 132, in main
    merged = merge_models(
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 146, in merge_models
    merged = rebasin_merge(
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 286, in rebasin_merge
    thetas["model_a"] = simple_merge(
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 244, in simple_merge
    res.result()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 451, in result
    return self.__get_result()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 342, in simple_merge_key
    with merge_key_context(key, thetas, *args, **kwargs) as result:
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 428, in merge_key_context
    result = merge_key(*args, **kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 403, in merge_key
    merged_key = merge_method(**merge_args).to(storage_device)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge_methods.py", line 27, in weighted_sum
    return (1 - alpha) * a + alpha * b
RuntimeError: The size of tensor a (4) must match the size of tensor b (9) at non-singleton dimension 1

Run 2:

before loading models: 0.000
loading: H:\Users\adamf\AI_Progs\AI_Models\test\00-regit-nametolongmaster4_50prune.safetensors
loading: H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\dreamshaper_7-inpainting.safetensors
models loaded: 0.000
stage 1: 100%|█████████▉| 1130/1131 [00:06<00:00, 185.51it/s]
Traceback (most recent call last):
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\merge_models.py", line 151, in <module>
    main()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\merge_models.py", line 132, in main
    merged = merge_models(
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 162, in merge_models
    merged = simple_merge(
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 244, in simple_merge
    res.result()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 451, in result
    return self.__get_result()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 342, in simple_merge_key
    with merge_key_context(key, thetas, *args, **kwargs) as result:
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 428, in merge_key_context
    result = merge_key(*args, **kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 403, in merge_key
    merged_key = merge_method(**merge_args).to(storage_device)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge_methods.py", line 27, in weighted_sum
    return (1 - alpha) * a + alpha * b
RuntimeError: The size of tensor a (4) must match the size of tensor b (9) at non-singleton dimension 1
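
The 4-vs-9 mismatch at dimension 1 points at the UNet's first convolution: an SD1.x inpainting checkpoint stores model.diffusion_model.input_blocks.0.0.weight with 9 input channels (4 latent + 4 masked-image + 1 mask) while a standard checkpoint has 4, so a plain weighted sum of the two tensors cannot broadcast. Below is a minimal sketch of one possible workaround, blending only the channels both models share and keeping the inpainting model's extra channels untouched; the function name and the channel handling are illustrative assumptions, not sd-meh's actual code.

import torch

def weighted_sum_overlap(a: torch.Tensor, b: torch.Tensor, alpha: float) -> torch.Tensor:
    # Plain weighted sum when the shapes already agree.
    if a.shape == b.shape:
        return (1 - alpha) * a + alpha * b
    # Conv weights whose input-channel dim (dim 1) differs, e.g. [320, 9, 3, 3] vs [320, 4, 3, 3]:
    # blend the shared channels, keep the extra channels from the larger (inpainting) tensor.
    if a.dim() == b.dim() and a.shape[0] == b.shape[0] and a.shape[1] != b.shape[1]:
        big, small = (a, b) if a.shape[1] > b.shape[1] else (b, a)
        n = small.shape[1]
        merged = big.clone()
        merged[:, :n] = (1 - alpha) * a[:, :n] + alpha * b[:, :n]
        return merged
    raise RuntimeError(f"incompatible shapes: {tuple(a.shape)} vs {tuple(b.shape)}")

Whether the extra mask/masked-image channels stay meaningful after such a merge is a separate question; the sketch only shows why the current weighted_sum raises.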

s1dlx commented 1 year ago

what about using the inpainting model as a?

DirtyHamster commented 1 year ago

Similar errors: only the sizes in the RuntimeError swap, from "The size of tensor a (4) must match the size of tensor b (9) at non-singleton dimension 1" to "The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1".

H:\Users\adamf\AI_Progs\sd-meh-merge\meh>merge_models.py -a H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\sd-v1-5-inpainting.ckpt -b H:\Users\adamf\AI_Progs\AI_Models\test\00-regit-nametolongmaster16_50prune.safetensors -m weighted_sum -p 32 -o H:\Users\adamf\AI_Progs\AI_Models\test\testggg -f safetensors -ba 0.5 -bb 0.5 -pr -rb -rbi 1

before loading models: 0.000
loading: H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\sd-v1-5-inpainting.ckpt
loading: H:\Users\adamf\AI_Progs\AI_Models\test\00-regit-nametolongmaster16_50prune.safetensors
models loaded: 0.000
permuting
0 iteration start: 0.000
weights & bases, before simple merge: 0.000
stage 1: 100%|█████████▉| 1130/1131 [02:47<00:00, 6.73it/s]
Traceback (most recent call last):
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\merge_models.py", line 151, in <module>
    main()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\merge_models.py", line 132, in main
    merged = merge_models(
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 146, in merge_models
    merged = rebasin_merge(
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 286, in rebasin_merge
    thetas["model_a"] = simple_merge(
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 244, in simple_merge
    res.result()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 451, in result
    return self.__get_result()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 342, in simple_merge_key
    with merge_key_context(key, thetas, *args, **kwargs) as result:
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 428, in merge_key_context
    result = merge_key(*args, **kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 403, in merge_key
    merged_key = merge_method(**merge_args).to(storage_device)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge_methods.py", line 27, in weighted_sum
    return (1 - alpha) * a + alpha * b
RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1

H:\Users\adamf\AI_Progs\sd-meh-merge\meh>merge_models.py -a H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\sd-v1-5-inpainting.ckpt -b H:\Users\adamf\AI_Progs\AI_Models\test\00-regit-nametolongmaster16_50prune.safetensors -m weighted_sum -p 32 -o H:\Users\adamf\AI_Progs\AI_Models\test\testggg -f safetensors -ba 0.5 -bb 0.5 -pr

before loading models: 0.000
loading: H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\sd-v1-5-inpainting.ckpt
loading: H:\Users\adamf\AI_Progs\AI_Models\test\00-regit-nametolongmaster16_50prune.safetensors
models loaded: 0.000
stage 1: 100%|█████████▉| 1130/1131 [00:05<00:00, 206.46it/s]
Traceback (most recent call last):
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\merge_models.py", line 151, in <module>
    main()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\merge_models.py", line 132, in main
    merged = merge_models(
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 162, in merge_models
    merged = simple_merge(
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 244, in simple_merge
    res.result()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 451, in result
    return self.__get_result()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 342, in simple_merge_key
    with merge_key_context(key, thetas, *args, **kwargs) as result:
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 428, in merge_key_context
    result = merge_key(*args, **kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 403, in merge_key
    merged_key = merge_method(**merge_args).to(storage_device)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge_methods.py", line 27, in weighted_sum
    return (1 - alpha) * a + alpha * b
RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1
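
For quickly checking which checkpoint is the inpainting one (and therefore which side of the merge it sits on), the input-channel count of the UNet's first convolution is enough. This is a small, hypothetical inspection helper, not part of meh; the .ckpt handling assumes the usual "state_dict" wrapper found in SD checkpoints.

import torch
from safetensors.torch import load_file

def unet_in_channels(path: str) -> int:
    # Load the state dict (.safetensors directly; .ckpt via torch.load).
    if path.endswith(".safetensors"):
        state = load_file(path)
    else:
        state = torch.load(path, map_location="cpu")["state_dict"]
    # 4 -> standard txt2img UNet, 9 -> inpainting, 8 -> instruct-pix2pix style.
    return state["model.diffusion_model.input_blocks.0.0.weight"].shape[1]

# Example: unet_in_channels(r"H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\sd-v1-5-inpainting.ckpt") should report 9.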

s1dlx commented 1 year ago

0.9.0 should fix this

DirtyHamster commented 1 year ago

Will check; I haven't had a chance to do so yet.

DirtyHamster commented 1 year ago

Part 1: The normal weighted sum worked with:

merge_models.py -a H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\Reliberate-inpainting.safetensors -b H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\wd-ink-fp16.safetensors -m weighted_sum -p 16 -o H:\Users\adamf\AI_Progs\AI_Models\test\inpaintfp16test -f safetensors -ba 0.5 -bb 0.5 -pr

This looks good now. I still have to test the output file, but I want to run the other pix2pix test first, so I probably won't get to the file testing tonight; I will get through the pix2pix run, though. Should I try reversing the models to see if it works as model b too?

Part 2: I still get the re-basin error. If this is possible to fix it would be nice; if not, at least the above works. RB error with:

merge_models.py -a H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\Reliberate-inpainting.safetensors -b H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\wd-ink-fp16.safetensors -m weighted_sum -p 16 -o H:\Users\adamf\AI_Progs\AI_Models\test\inpaintfp16test -f safetensors -ba 0.5 -bb 0.5 -pr -rb -rbi 50

Starts and Returns:

INFO: Assembling alpha w&b
INFO: base_alpha: 0.5
INFO: alpha weights: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
INFO: Loading: H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\Reliberate-inpainting.safetensors
INFO: Loading: H:\Users\adamf\AI_Progs\AI_Models\Stable_Diffusion\wd-ink-fp16.safetensors
INFO: start merging with weighted_sum method
INFO: Init rebasin iterations
INFO: Rebasin iteration 0
model.diffusion_model.input_blocks.0.0.weight torch.Size([320, 9, 3, 3]) torch.Size([320, 4, 3, 3])
model.diffusion_model.input_blocks.1.1.proj_in.weight torch.Size([320, 320, 1, 1]) torch.Size([320, 320])
model.diffusion_model.input_blocks.1.1.proj_out.weight torch.Size([320, 320, 1, 1]) torch.Size([320, 320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight torch.Size([320, 768]) torch.Size([320, 1024])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight torch.Size([320, 768]) torch.Size([320, 1024])
model.diffusion_model.input_blocks.2.1.proj_in.weight torch.Size([320, 320, 1, 1]) torch.Size([320, 320])
model.diffusion_model.input_blocks.2.1.proj_out.weight torch.Size([320, 320, 1, 1]) torch.Size([320, 320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight torch.Size([320, 768]) torch.Size([320, 1024])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight torch.Size([320, 768]) torch.Size([320, 1024])
model.diffusion_model.input_blocks.4.1.proj_in.weight torch.Size([640, 640, 1, 1]) torch.Size([640, 640])
model.diffusion_model.input_blocks.4.1.proj_out.weight torch.Size([640, 640, 1, 1]) torch.Size([640, 640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight torch.Size([640, 768]) torch.Size([640, 1024])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight torch.Size([640, 768]) torch.Size([640, 1024])
model.diffusion_model.input_blocks.5.1.proj_in.weight torch.Size([640, 640, 1, 1]) torch.Size([640, 640])
model.diffusion_model.input_blocks.5.1.proj_out.weight torch.Size([640, 640, 1, 1]) torch.Size([640, 640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight torch.Size([640, 768]) torch.Size([640, 1024])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight torch.Size([640, 768]) torch.Size([640, 1024])
model.diffusion_model.input_blocks.7.1.proj_in.weight torch.Size([1280, 1280, 1, 1]) torch.Size([1280, 1280])
model.diffusion_model.input_blocks.7.1.proj_out.weight torch.Size([1280, 1280, 1, 1]) torch.Size([1280, 1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight torch.Size([1280, 768]) torch.Size([1280, 1024])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight torch.Size([1280, 768]) torch.Size([1280, 1024])
model.diffusion_model.input_blocks.8.1.proj_in.weight torch.Size([1280, 1280, 1, 1]) torch.Size([1280, 1280])
model.diffusion_model.input_blocks.8.1.proj_out.weight torch.Size([1280, 1280, 1, 1]) torch.Size([1280, 1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight torch.Size([1280, 768]) torch.Size([1280, 1024])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight torch.Size([1280, 768]) torch.Size([1280, 1024])
model.diffusion_model.middle_block.1.proj_in.weight torch.Size([1280, 1280, 1, 1]) torch.Size([1280, 1280])
model.diffusion_model.middle_block.1.proj_out.weight torch.Size([1280, 1280, 1, 1]) torch.Size([1280, 1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight torch.Size([1280, 768]) torch.Size([1280, 1024])
model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight torch.Size([1280, 768]) torch.Size([1280, 1024])
model.diffusion_model.output_blocks.10.1.proj_in.weight torch.Size([320, 320, 1, 1]) torch.Size([320, 320])
model.diffusion_model.output_blocks.10.1.proj_out.weight torch.Size([320, 320, 1, 1]) torch.Size([320, 320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_k.weight torch.Size([320, 768]) torch.Size([320, 1024])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_v.weight torch.Size([320, 768]) torch.Size([320, 1024])
model.diffusion_model.output_blocks.11.1.proj_in.weight torch.Size([320, 320, 1, 1]) torch.Size([320, 320])
model.diffusion_model.output_blocks.11.1.proj_out.weight torch.Size([320, 320, 1, 1]) torch.Size([320, 320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_k.weight torch.Size([320, 768]) torch.Size([320, 1024])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_v.weight torch.Size([320, 768]) torch.Size([320, 1024])
model.diffusion_model.output_blocks.3.1.proj_in.weight torch.Size([1280, 1280, 1, 1]) torch.Size([1280, 1280])
model.diffusion_model.output_blocks.3.1.proj_out.weight torch.Size([1280, 1280, 1, 1]) torch.Size([1280, 1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight torch.Size([1280, 768]) torch.Size([1280, 1024])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight torch.Size([1280, 768]) torch.Size([1280, 1024])
model.diffusion_model.output_blocks.4.1.proj_in.weight torch.Size([1280, 1280, 1, 1]) torch.Size([1280, 1280])
model.diffusion_model.output_blocks.4.1.proj_out.weight torch.Size([1280, 1280, 1, 1]) torch.Size([1280, 1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight torch.Size([1280, 768]) torch.Size([1280, 1024])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight torch.Size([1280, 768]) torch.Size([1280, 1024])
model.diffusion_model.output_blocks.5.1.proj_in.weight torch.Size([1280, 1280, 1, 1]) torch.Size([1280, 1280])
model.diffusion_model.output_blocks.5.1.proj_out.weight torch.Size([1280, 1280, 1, 1]) torch.Size([1280, 1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight torch.Size([1280, 768]) torch.Size([1280, 1024])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight torch.Size([1280, 768]) torch.Size([1280, 1024])
model.diffusion_model.output_blocks.6.1.proj_in.weight torch.Size([640, 640, 1, 1]) torch.Size([640, 640])
model.diffusion_model.output_blocks.6.1.proj_out.weight torch.Size([640, 640, 1, 1]) torch.Size([640, 640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_k.weight torch.Size([640, 768]) torch.Size([640, 1024])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_v.weight torch.Size([640, 768]) torch.Size([640, 1024])
model.diffusion_model.output_blocks.7.1.proj_in.weight torch.Size([640, 640, 1, 1]) torch.Size([640, 640])
model.diffusion_model.output_blocks.7.1.proj_out.weight torch.Size([640, 640, 1, 1]) torch.Size([640, 640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_k.weight torch.Size([640, 768]) torch.Size([640, 1024])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_v.weight torch.Size([640, 768]) torch.Size([640, 1024])
model.diffusion_model.output_blocks.8.1.proj_in.weight torch.Size([640, 640, 1, 1]) torch.Size([640, 640])
model.diffusion_model.output_blocks.8.1.proj_out.weight torch.Size([640, 640, 1, 1]) torch.Size([640, 640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight torch.Size([640, 768]) torch.Size([640, 1024])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_v.weight torch.Size([640, 768]) torch.Size([640, 1024])
model.diffusion_model.output_blocks.9.1.proj_in.weight torch.Size([320, 320, 1, 1]) torch.Size([320, 320])
model.diffusion_model.output_blocks.9.1.proj_out.weight torch.Size([320, 320, 1, 1]) torch.Size([320, 320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_k.weight torch.Size([320, 768]) torch.Size([320, 1024])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_v.weight torch.Size([320, 768]) torch.Size([320, 1024])
stage 1: 100%|██████████| 1131/1131 [00:34<00:00, 32.94it/s]
stage 2: 100%|██████████| 1215/1215 [00:00<00:00, 28256.61it/s]

Traceback (most recent call last):
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\merge_models.py", line 181, in <module>
    main()
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\adamf\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\merge_models.py", line 162, in main
    merged = merge_models(
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 149, in merge_models
    merged = rebasin_merge(
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\merge.py", line 319, in rebasin_merge
    thetas["model_a"] = apply_permutation(perm_spec, perm_1, thetas["model_a"])
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\rebasin.py", line 2197, in apply_permutation
    return {k: get_permuted_param(ps, perm, k, params) for k in params.keys()}
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\rebasin.py", line 2197, in <dictcomp>
    return {k: get_permuted_param(ps, perm, k, params) for k in params.keys()}
  File "H:\Users\adamf\AI_Progs\sd-meh-merge\meh\sd_meh\rebasin.py", line 2183, in get_permuted_param
    for axis, p in enumerate(ps.axes_to_perm[k]):
KeyError: 'cond_stage_model.model.ln_final.bias'
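
The KeyError comes from the permutation step rather than the merge itself: the checkpoint carries cond_stage_model.model.ln_final.bias, an OpenCLIP-style (SD2.x) text-encoder key, and the SD1.x permutation spec has no entry for it, so the ps.axes_to_perm[k] lookup fails. Below is a minimal sketch of a more tolerant lookup that passes unknown keys through unpermuted; the function names mirror the traceback, but the bodies are assumptions about a generic rebasin implementation, not meh's actual code.

import torch

def get_permuted_param(ps, perm, k, params):
    w = params[k]
    # Keys the permutation spec does not know about (e.g. SD2.x/OpenCLIP
    # text-encoder keys such as cond_stage_model.model.ln_final.bias)
    # are returned unchanged instead of raising KeyError.
    if k not in ps.axes_to_perm:
        return w
    for axis, p in enumerate(ps.axes_to_perm[k]):
        if p is not None:
            w = torch.index_select(w, axis, perm[p].int())
    return w

def apply_permutation(ps, perm, params):
    return {k: get_permuted_param(ps, perm, k, params) for k in params.keys()}

Passing such keys through keeps them from being permuted at all, which is probably the desired behaviour for text-encoder tensors the spec was never written to cover.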