pkuliyi2015 / multidiffusion-upscaler-for-automatic1111

Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0

Error When IP-Adapter is Enabled - AttributeError: 'dict' object has no attribute 'shape' #341

Open Vigilence opened 6 months ago

Vigilence commented 6 months ago

I first want to start with a big thank you for your extension! It is very useful, and I have come to enjoy the quality it provides in the few tests I have completed.

I am currently using it and have noticed a bug that I would like to report.

I am using AUTOMATIC1111.

I can use Tiled Diffusion, Noise Inversion, and Tiled VAE fine with ControlNet. However, if IP-Adapter is enabled for either SD 1.5 or SDXL, I get the error AttributeError: 'dict' object has no attribute 'shape'.

Disabling IP-Adapter in control net resolves the issue.

IP-Adapter being used: Preprocessor: ip-adapter_clip_sd15, Model: ip-adapter_sd15.
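For anyone triaging this: the traceback further down shows the tiling code calling .shape on whatever ControlNet hands it, and an IP-Adapter unit hands it a dict of embeddings rather than a tensor. A minimal stand-in (hypothetical names, stdlib only, no torch) reproduces the failure mode:

```python
from types import SimpleNamespace

def tensor_rank(control_tensor):
    # Mirrors the failing check in the extension's tiling code:
    #   if len(control_tensor.shape) == 3:
    return len(control_tensor.shape)

# A regular ControlNet unit hands the tiler a tensor-like object with .shape:
plain_hint = SimpleNamespace(shape=(3, 96, 96))
print(tensor_rank(plain_hint))  # 3

# An IP-Adapter unit hands over a dict of embeddings instead (key name illustrative):
ip_hint = {"image_embeds": SimpleNamespace(shape=(1, 4, 768))}
try:
    tensor_rank(ip_hint)
except AttributeError as exc:
    print(exc)  # 'dict' object has no attribute 'shape'
```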

[Tiled Diffusion] upscaling image with 4x-UltraSharp...
tiled upscale: 100%|███████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 15.66it/s]
[Tiled Diffusion] ControlNet found, support is enabled.
2024-01-08 07:20:42,341 - ControlNet - INFO - unit_separate = False, style_align = False
2024-01-08 07:20:42,347 - ControlNet - INFO - Loading preprocessor: reference_adain+attn
2024-01-08 07:20:42,347 - ControlNet - INFO - preprocessor resolution = 1024
2024-01-08 07:20:42,373 - ControlNet - INFO - Loading model from cache: control_v11p_sd15_canny [d14c016b]
2024-01-08 07:20:42,374 - ControlNet - INFO - Loading preprocessor: canny
2024-01-08 07:20:42,374 - ControlNet - INFO - preprocessor resolution = 1024
2024-01-08 07:20:42,383 - ControlNet - INFO - Loading model from cache: control_v11f1e_sd15_tile [a371b31b]
2024-01-08 07:20:42,385 - ControlNet - INFO - Loading preprocessor: tile_resample
2024-01-08 07:20:42,385 - ControlNet - INFO - preprocessor resolution = 1024
2024-01-08 07:20:42,410 - ControlNet - INFO - ControlNet Hooked - Time = 0.07079553604125977
warn: noise inversion only supports the "Euler" sampler, switch to it sliently...
MixtureOfDiffusers Sampling: : 0it [00:00, ?it/s]Mixture of Diffusers hooked into 'Euler' sampler, Tile size: 96x96, Tile count: 6, Batch size: 3, Tile batches: 2 (ext: NoiseInv, ContrlNet)
[Tiled VAE]: the input size is tiny and unnecessary to tile.
*** Error running process_batch: I:\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py
    Traceback (most recent call last):
      File "I:\stable-diffusion-webui\modules\scripts.py", line 799, in process_batch
        script.process_batch(p, *script_args, **kwargs)
      File "I:\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 302, in process_batch
        if self.idx != sd_unet.current_unet.profile_idx:
    AttributeError: 'NoneType' object has no attribute 'profile_idx'
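(Side note: the TensorRT error above is a separate, non-fatal issue; that script dereferences sd_unet.current_unet without a None check when no TRT engine is active. A hypothetical guard, with names taken from the traceback rather than the actual TensorRT extension code, would look like this:)

```python
from types import SimpleNamespace

class TrtScriptSketch:
    """Sketch of a defensive version of the failing line
    `if self.idx != sd_unet.current_unet.profile_idx:` (hypothetical)."""

    def __init__(self, idx):
        self.idx = idx

    def needs_profile_switch(self, current_unet):
        # current_unet is None when no TensorRT engine is loaded,
        # so bail out instead of dereferencing it.
        if current_unet is None:
            return False
        return self.idx != current_unet.profile_idx

print(TrtScriptSketch(0).needs_profile_switch(None))  # False
print(TrtScriptSketch(0).needs_profile_switch(SimpleNamespace(profile_idx=1)))  # True
```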

---
[Tiled VAE]: the input size is tiny and unnecessary to tile.
2024-01-08 07:20:43,824 - ControlNet - INFO - ControlNet used torch.float32 VAE to encode torch.Size([3, 4, 96, 96]).
MixtureOfDiffusers Sampling: : 0it [00:04, ?it/s]
[Tiled VAE]: the input size is tiny and unnecessary to tile.
2024-01-08 07:20:47,218 - ControlNet - INFO - ControlNet used torch.float32 VAE to encode torch.Size([3, 4, 96, 96]).
Noise Inversion: 100%|█████████████████████████████████████████████████████████████████| 10/10 [00:32<00:00,  3.29s/it]
  0%|                                                                                           | 0/12 [00:00<?, ?it/s][Tiled VAE]: the input size is tiny and unnecessary to tile.
2024-01-08 07:21:18,981 - ControlNet - INFO - ControlNet used torch.float32 VAE to encode torch.Size([6, 4, 96, 96]).
[Tiled VAE]: the input size is tiny and unnecessary to tile.
2024-01-08 07:22:47,300 - ControlNet - INFO - ControlNet used torch.float32 VAE to encode torch.Size([6, 4, 96, 96]).
100%|█████████████████████████████████████████████████████████████████████████████████| 12/12 [32:34<00:00, 162.88s/it]
[Tiled VAE]: the input size is tiny and unnecessary to tile.                          | 12/24 [27:58<28:50, 144.23s/it]
Total progress:  50%|████████████████████████████████▌                                | 12/24 [28:00<28:00, 140.03s/it]
{"prompt": "watercolor portrait of a cat wearing a hoodie, cotton fabric, bucket, mop, wood floor, masterpiece, high quality, 8k,  <lora:watercolor_v1:1>", "all_prompts": ["watercolor portrait of a cat wearing a hoodie, cotton fabric, bucket, mop, wood floor, masterpiece, high quality, 8k,  <lora:watercolor_v1:1>"], "negative_prompt": "bad-artist, easynegative", "all_negative_prompts": ["bad-artist, easynegative"], "seed": 4227621910, "all_seeds": [4227621910], "subseed": 4166226449, "all_subseeds": [4166226449], "subseed_strength": 0, "width": 1024, "height": 1536, "sampler_name": "Euler", "cfg_scale": 14, "steps": 45, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "absolutereality_v16", "sd_model_hash": "be1d90c4ab", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.25, "extra_generation_params": {"Tiled Diffusion upscaler": "4x-UltraSharp", "Tiled Diffusion scale factor": 2, "Tiled Diffusion": {"Method": "Mixture of Diffusers", "Tile tile width": 96, "Tile tile height": 96, "Tile Overlap": 48, "Tile batch size": 4, "Upscaler": "4x-UltraSharp", "Upscale factor": 2, "Keep input size": true, "NoiseInv": true, "NoiseInv Steps": 10, "NoiseInv Retouch": 1, "NoiseInv Renoise strength": 1, "NoiseInv Kernel size": 64}, "ControlNet 0": "Module: reference_adain+attn, Model: None, Weight: 1, Resize Mode: Crop and Resize, Low Vram: True, Threshold A: 0.5, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced, Hr Option: Both, Save Detected Map: True", "ControlNet 1": "Module: canny, Model: control_v11p_sd15_canny [d14c016b], Weight: 1, Resize Mode: Crop and Resize, Low Vram: True, Processor Res: 512, Threshold A: 100, Threshold B: 200, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced, Hr Option: Both, Save Detected Map: True", "ControlNet 2": "Module: tile_resample, Model: control_v11f1e_sd15_tile [a371b31b], 
Weight: 1, Resize Mode: Crop and Resize, Low Vram: True, Threshold A: 1, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced, Hr Option: Both, Save Detected Map: True", "Denoising strength": 0.25, "Lora hashes": "watercolor_v1: 94022ff2816f", "TI hashes": "bad-artist: 2d356134903e, easynegative: c74b4e810b03"}, "index_of_first_image": 0, "infotexts": ["watercolor portrait of a cat wearing a hoodie, cotton fabric, bucket, mop, wood floor, masterpiece, high quality, 8k,  <lora:watercolor_v1:1>\nNegative prompt: bad-artist, easynegative\nSteps: 45, Sampler: Euler, CFG scale: 14, Seed: 4227621910, Size: 1024x1536, Model hash: be1d90c4ab, Model: absolutereality_v16, Denoising strength: 0.25, Clip skip: 2, Tiled Diffusion upscaler: 4x-UltraSharp, Tiled Diffusion scale factor: 2, Tiled Diffusion: {\"Method\": \"Mixture of Diffusers\", \"Tile tile width\": 96, \"Tile tile height\": 96, \"Tile Overlap\": 48, \"Tile batch size\": 4, \"Upscaler\": \"4x-UltraSharp\", \"Upscale factor\": 2, \"Keep input size\": true, \"NoiseInv\": true, \"NoiseInv Steps\": 10, \"NoiseInv Retouch\": 1, \"NoiseInv Renoise strength\": 1, \"NoiseInv Kernel size\": 64}, ControlNet 0: \"Module: reference_adain+attn, Model: None, Weight: 1, Resize Mode: Crop and Resize, Low Vram: True, Threshold A: 0.5, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced, Hr Option: Both, Save Detected Map: True\", ControlNet 1: \"Module: canny, Model: control_v11p_sd15_canny [d14c016b], Weight: 1, Resize Mode: Crop and Resize, Low Vram: True, Processor Res: 512, Threshold A: 100, Threshold B: 200, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced, Hr Option: Both, Save Detected Map: True\", ControlNet 2: \"Module: tile_resample, Model: control_v11f1e_sd15_tile [a371b31b], Weight: 1, Resize Mode: Crop and Resize, Low Vram: True, Threshold A: 1, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced, Hr 
Option: Both, Save Detected Map: True\", Lora hashes: \"watercolor_v1: 94022ff2816f\", TI hashes: \"bad-artist: 2d356134903e, easynegative: c74b4e810b03\", Version: v1.7.0-311-g6869d958"], "styles": [], "job_timestamp": "20240108072040", "clip_skip": 2, "is_using_inpainting_conditioning": false, "version": "v1.7.0-311-g6869d958"}
[Tiled Diffusion] upscaling image with 4x-UltraSharp...
tiled upscale: 100%|███████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 14.67it/s]
[Tiled Diffusion] ControlNet found, support is enabled.
2024-01-08 08:02:45,685 - ControlNet - INFO - unit_separate = False, style_align = False
2024-01-08 08:02:45,686 - ControlNet - INFO - Loading model from cache: ip-adapter-full-face_sd15 [3459c5eb]
2024-01-08 08:02:45,686 - ControlNet - INFO - Loading preprocessor: ip-adapter_clip_sd15
2024-01-08 08:02:45,687 - ControlNet - INFO - preprocessor resolution = 1024
2024-01-08 08:02:45,688 - ControlNet - INFO - Loading model from cache: control_v11p_sd15_canny [d14c016b]
2024-01-08 08:02:45,689 - ControlNet - INFO - Loading preprocessor: canny
2024-01-08 08:02:45,689 - ControlNet - INFO - preprocessor resolution = 1024
2024-01-08 08:02:45,702 - ControlNet - INFO - Loading model from cache: control_v11f1e_sd15_tile [a371b31b]
2024-01-08 08:02:45,705 - ControlNet - INFO - Loading preprocessor: tile_resample
2024-01-08 08:02:45,706 - ControlNet - INFO - preprocessor resolution = 1024
2024-01-08 08:02:45,764 - ControlNet - INFO - ControlNet Hooked - Time = 0.08176636695861816
warn: noise inversion only supports the "Euler" sampler, switch to it sliently...
*** Error completing request
*** Arguments: ('task(i2bheo0a2oq2sfd)', 0, 'watercolor portrait of a cat wearing a hoodie, cotton fabric, bucket, mop, wood floor, masterpiece, high quality, 8k,  <lora:watercolor_v1:1>', 'bad-artist, easynegative', [], <PIL.Image.Image image mode=RGBA size=512x768 at 0x28100926110>, None, None, None, None, None, None, 45, 'DPM++ 3M SDE Karras', 4, 0, 1, 1, 1, 14, 1.5, 0.25, 0, 1536, 1024, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x00000281A2959AE0>, 0, False, 1, 0.5, 4, 0, 0.5, 2, False, '', 0.8, 4227621910, False, -1, 0, 0, 0, True, 'Mixture of Diffusers', False, True, 1024, 1024, 96, 96, 48, 4, '4x-UltraSharp', 2, True, 10, 1, 1, 64, True, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 3072, 192, True, True, True, False, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', UiControlNetUnit(enabled=True, module='ip-adapter_clip_sd15', model='ip-adapter-full-face_sd15 [3459c5eb]', weight=1, image={'image': array([[[123,  85,  64],
***         [... uint8 image and mask pixel-array dumps elided ...]
***         [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=True, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=True, module='canny', model='control_v11p_sd15_canny [d14c016b]', weight=1, image={'image': array([[[0, 0, 0],
***         [... uint8 image and mask pixel-array dumps elided ...]
***         [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=True, processor_res=512, threshold_a=100, threshold_b=200, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=True, module='tile_resample', model='control_v11f1e_sd15_tile [a371b31b]', weight=1, image=None, resize_mode='Crop and Resize', low_vram=True, processor_res=-1, threshold_a=1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, '', False, False, '', '', '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, 
'', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
    Traceback (most recent call last):
      File "I:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "I:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "I:\stable-diffusion-webui\modules\img2img.py", line 235, in img2img
        processed = process_images(p)
      File "I:\stable-diffusion-webui\modules\processing.py", line 782, in process_images
        res = process_images_inner(p)
      File "I:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "I:\stable-diffusion-webui\modules\processing.py", line 852, in process_images_inner
        p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
      File "I:\stable-diffusion-webui\modules\processing.py", line 1525, in init
        self.sampler = sd_samplers.create_sampler(self.sampler_name, self.sd_model)
      File "I:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\tilediffusion.py", line 357, in <lambda>
        sd_samplers.create_sampler = lambda name, model: self.create_sampler_hijack(
      File "I:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\tilediffusion.py", line 447, in create_sampler_hijack
        delegate.init_controlnet(self.controlnet_script, control_tensor_cpu)
      File "I:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 249, in wrapper
        return fn(*args, **kwargs)
      File "I:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\abstractdiffusion.py", line 464, in init_controlnet
        self.prepare_controlnet_tensors()
      File "I:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 249, in wrapper
        return fn(*args, **kwargs)
      File "I:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\abstractdiffusion.py", line 499, in prepare_controlnet_tensors
        if len(control_tensor.shape) == 3:
    AttributeError: 'dict' object has no attribute 'shape'
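The crash site in prepare_controlnet_tensors suggests an obvious defensive shape: separate dict-valued hints from tensor-like ones before the rank check. A sketch of that idea (hypothetical helper, not the project's actual fix; a real patch would have to decide whether IP-Adapter embeddings can simply bypass tiling):

```python
from types import SimpleNamespace

def split_control_tensors(control_tensors):
    """Separate tensor-like hints (tileable: they have a spatial .shape)
    from dict-valued hints (IP-Adapter embeddings, which cannot be
    cropped per tile and would have to be passed through untouched)."""
    tileable, passthrough = [], []
    for t in control_tensors:
        if isinstance(t, dict):
            passthrough.append(t)
        elif hasattr(t, "shape") and len(t.shape) in (3, 4):
            tileable.append(t)
        else:
            passthrough.append(t)
    return tileable, passthrough

canny_hint = SimpleNamespace(shape=(1, 3, 96, 96))   # tensor-like stand-in
ip_hint = {"image_embeds": object()}                 # dict-shaped IP-Adapter stand-in
tileable, passthrough = split_control_tensors([canny_hint, ip_hint])
print(len(tileable), len(passthrough))  # 1 1
```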
bastim94 commented 6 months ago

I just came here to report the same problem. As soon as I enable IP-Adapter, I get the same error.

dill-shower commented 5 months ago

Same problem with another version of torch and without xformers. It's a problem only with MultiDiffusion.