AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Inpainting - Masked only #9517

Open · joeb7 opened this issue 1 year ago

joeb7 commented 1 year ago

Is there an existing issue for this?

What happened?

Chose Inpaint with "masked area only" selected. This should return the image with the masked area changed, but instead it gives a runtime error (see below) ending in: "RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels instead"

Steps to reproduce the problem

  1. Select an image for inpainting
  2. Select "masked area only"
  3. Enter a prompt and generate

What should have happened?

It should work with this option selected. I have used it before, but as of yesterday it is no longer working.

Commit where the problem happens

commit: 22bcc7be

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Mozilla Firefox

Command Line Arguments

--xformers

List of extensions

None; the problem was reproduced with no extensions enabled.

Console logs

Arguments: ('task(7kjc0xi46aee6w8)', 2, 'Soraka, normal clothes, tshirt, jeans', '', [], <PIL.Image.Image image mode=RGBA size=1000x590 at 0x2110123D6F0>, None, {'image': <PIL.Image.Image image mode=RGBA size=1000x590 at 0x2110123FA00>, 'mask': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1000x590 at 0x2110123D960>}, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 1, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 1, 32, 0, '', '', '', [], 0, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\img2img.py", line 172, in img2img
    processed = process_images(p)
  File "D:\stable-diffusion-webui\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "D:\stable-diffusion-webui\modules\processing.py", line 698, in process_images_inner
    image = apply_overlay(image, p.paste_to, i, p.overlay_images)
  File "D:\stable-diffusion-webui\modules\processing.py", line 69, in apply_overlay
    image = images.resize_image(1, image, w, h)
  File "D:\stable-diffusion-webui\modules\images.py", line 287, in resize_image
    resized = resize(im, src_w, src_h)
  File "D:\stable-diffusion-webui\modules\images.py", line 270, in resize
    im = upscaler.scaler.upscale(im, scale, upscaler.data_path)
  File "D:\stable-diffusion-webui\modules\upscaler.py", line 63, in upscale
    img = self.do_upscale(img, selected_model)
  File "D:\stable-diffusion-webui\modules\esrgan_model.py", line 154, in do_upscale
    img = esrgan_upscale(model, img)
  File "D:\stable-diffusion-webui\modules\esrgan_model.py", line 225, in esrgan_upscale
    output = upscale_without_tiling(model, tile)
  File "D:\stable-diffusion-webui\modules\esrgan_model.py", line 204, in upscale_without_tiling
    output = model(img)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\modules\esrgan_model_arch.py", line 62, in forward
    return self.model(feat)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
    input = module(input)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 319, in lora_Conv2d_forward
    return torch.nn.Conv2d_forward_before_lora(self, input)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels instead

Additional information

No response
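For context on what the final RuntimeError means: the ESRGAN upscaler's first convolution has weight shape [64, 3, 3, 3], so it accepts only 3-channel (RGB) input, while the overlay image produced by the "masked only" inpaint path arrives as 4-channel RGBA. A minimal sketch of the mismatch in plain PyTorch (not webui code; tensor names are illustrative):

```python
import torch

# A conv layer shaped like ESRGAN's first layer (weight [64, 3, 3, 3]):
# it accepts only 3-channel (RGB) input.
conv = torch.nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)

rgba = torch.rand(1, 4, 192, 192)  # a 4-channel (RGBA) batch, as in the log
raised = False
try:
    conv(rgba)
except RuntimeError:
    # "expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels"
    raised = True

# Dropping the alpha channel (the PIL equivalent: image.convert("RGB"))
# before the upscaler sees the image avoids the mismatch.
rgb = rgba[:, :3, :, :]
out = conv(rgb)
print(raised, out.shape)  # True torch.Size([1, 64, 192, 192])
```

This suggests the regression is in whatever step started handing the RGBA overlay straight to the upscaler instead of an RGB-converted copy.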

soyatlas commented 1 year ago

Me too.

Many thanks for everything. PS: sorry for my bad English.

RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels instead

Error completing request██████████████████████████████████████████████████████████████| 50/50 [00:11<00:00, 4.15it/s] Arguments: ('task(4lcjh4jax132ia5)', 3, '(intense #2888f0 cyan) dissolving to intense blue #7078e8 liquid, marblingai, water02, water elemental,\n,\n,\n,\n\n', 'woman, women, female, girl, (worst quality, low quality:2), monochrome, zombie, overexposure, watermark, text, bad anatomy,bad hand,extra hands,extra fingers,too many girl, female, woman, women, fingers,fused fingers,bad arm,distorted arm,extra arms,fused arms,extra legs,missing leg,disembodied leg,extra nipples, detached arm, liquid hand,inverted hand,disembodied limb, small breasts, loli, oversized head,extra body, extra navel,easynegative,(hair between eyes),sketch, duplicate, ugly, huge eyes, text, logo, worst face, (bad and mutated hands:1.3), (blurry:2.0), horror, geometry, bad_prompt, (bad hands), (missing fingers), multiple limbs, bad anatomy, (interlocked fingers:1.2), Ugly Fingers, (extra digit and hands and fingers and legs and arms:1.4), ((girl)), (deformed fingers:1.2), (long fingers:1.2),(bad-artist-anime), bad-artist, bad hand, extra legs ,(ng_deepnegative_v1_75t), 16-token-negative-deliberate-neg, bad-artist ,badhandsv5-neg , bad_prompt_version2, bad-picture-chill-75v, easynegative, learned_embeds , ng_deepnegative_v1_75t, verybadimagenegative_v1.3\n', [], <PIL.Image.Image image mode=RGBA size=512x768 at 0x1BBA6095060>, None, {'image': <PIL.Image.Image image mode=RGBA size=512x768 at 0x1BBA6094400>, 'mask': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=512x768 at 0x1BBA60942B0>}, <PIL.Image.Image image mode=RGBA size=614x921 at 0x1BBA60960E0>, <PIL.Image.Image image mode=RGBA size=614x921 at 0x1BB9C9E02E0>, None, None, 50, 18, 4, 0, 1, False, False, 1, 1, 30, 1.5, 0.63, 112770274.0, -1.0, 0, 0, 0, False, 0, 768, 512, 1, 0, 1, 32, 0, '', '', '', [], 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, 
False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, 3, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <controlnet.py.UiControlNetUnit object at 0x000001BBCA3A8370>, <controlnet.py.UiControlNetUnit object at 0x000001BBA5A68850>, <controlnet.py.UiControlNetUnit object at 0x000001BBA5A6BF10>, '

\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 17, '1,2,3,4', ['anythingV3_fp16.ckpt [812cd9f9d9]', 'FINAL\aatrok_goodmodel.safetensors [bb4168e972]', 'FINAL\adobisRealFlexibleMix_v2.safetensors [9e8a2c7a5d]', 'FINAL\chilloutmix_NiPrunedFp32Fix.safetensors [fc2511737a]', 'FINAL\chillwithnai_v10.safetensors [cc2f08f0a3]', 'FINAL\clarity_2.safetensors [73ab0ffbb9]', 'FINAL\deliberate_v2.safetensors [9aba26abdf]', 'FINAL\DreamFul_v2.7.safetensors [9a2ca7b2d1]', 'FINAL\DucHaitenAIart-v4.5.3.safetensors [ce4d987a0e]', 'FINAL\edgeOfRealism_eorV20Fp16BakedVAE.safetensors [7f6146b8a9]', 'FINAL\FEMALE\aresMix_v02.safetensors [1422136240]', 'FINAL\FEMALE\lofi_V21.safetensors [a158dc2e8a]', 'FINAL\FEMALE\perfectdeliberate_V10.safetensors [9d8bb7df43]', 'FINAL\FEMALE\serassphotocentaurs_10.safetensors [dc1873db3b]', 'FINAL\halcyon_v20Bergamot.safetensors [4b0f91f7d3]', 'FINAL\homodiffusionGay_homodiffusionV20FP32.safetensors [af4f527b17]', 'FINAL\homodiffusionGay_homodiffusionXMMM.safetensors [4b96759a3f]', 'FINAL\homoerotic_v2.safetensors [b656369cf7]', 'FINAL\icbinpICantBelieveIts_v6.safetensors [ac34765554]', 'FINAL\lehinamodel_v11.safetensors [cbdf30ac14]', 'FINAL\liberty_main.safetensors [8634d80dec]', 'FINAL\moonmix_fantasy20.safetensors [7458ea4104]', 'FINAL\neurogenV11_v11.safetensors [7ce537a0b7]', 'FINAL\orusium_v10.safetensors [e9833bc61b]', 'FINAL\plazmMen_plazmV10.ckpt [5ed3269242]', 'FINAL\povSkinTexture_v2.safetensors [3d98aa9feb]'], 18, '0.5,0.6,0.7', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, 50, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "I:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "I:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "I:\stable-diffusion-webui\modules\img2img.py", line 178, in img2img
    processed = process_images(p)
  File "I:\stable-diffusion-webui\modules\processing.py", line 611, in process_images
    res = process_images_inner(p)
  File "I:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "I:\stable-diffusion-webui\modules\processing.py", line 776, in process_images_inner
    image = apply_overlay(image, p.paste_to, i, p.overlay_images)
  File "I:\stable-diffusion-webui\modules\processing.py", line 70, in apply_overlay
    image = images.resize_image(1, image, w, h)
  File "I:\stable-diffusion-webui\modules\images.py", line 288, in resize_image
    resized = resize(im, src_w, src_h)
  File "I:\stable-diffusion-webui\modules\images.py", line 271, in resize
    im = upscaler.scaler.upscale(im, scale, upscaler.data_path)
  File "I:\stable-diffusion-webui\modules\upscaler.py", line 62, in upscale
    img = self.do_upscale(img, selected_model)
  File "I:\stable-diffusion-webui\modules\esrgan_model.py", line 150, in do_upscale
    img = esrgan_upscale(model, img)
  File "I:\stable-diffusion-webui\modules\esrgan_model.py", line 224, in esrgan_upscale
    output = upscale_without_tiling(model, tile)
  File "I:\stable-diffusion-webui\modules\esrgan_model.py", line 203, in upscale_without_tiling
    output = model(img)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\modules\esrgan_model_arch.py", line 61, in forward
    return self.model(feat)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 415, in lora_Conv2d_forward
    return torch.nn.Conv2d_forward_before_lora(self, input)
  File "I:\stable-diffusion-webui\extensions\a1111-sd-webui-lycoris\lycoris.py", line 746, in lyco_Conv2d_forward
    return torch.nn.Conv2d_forward_before_lyco(self, input)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels instead

Loading weights [bb4168e972] from I:\stable-diffusion-webui\models\Stable-diffusion\FINAL\aatrok_goodmodel.safetensors
Loading VAE weights specified in settings: I:\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying optimization: xformers... done.
Weights loaded in 1.2s (load weights from disk: 0.2s, apply weights to model: 0.8s, load VAE: 0.2s).
Error loading hypernetwork I:\stable-diffusion-webui\models\hypernetworks\water02.pt
Traceback (most recent call last):
  File "I:\stable-diffusion-webui\modules\hypernetworks\hypernetwork.py", line 331, in load_hypernetwork
    hypernetwork.load(path)
  File "I:\stable-diffusion-webui\modules\hypernetworks\hypernetwork.py", line 250, in load
    self.layer_structure = state_dict.get('layer_structure', [1, 2, 1])
AttributeError: 'Tensor' object has no attribute 'get'

100%|█████████████████████████████████████████████████████████████████████████████████| 50/50 [00:14<00:00, 3.51it/s] Error completing request1:06, 4.15it/s] Arguments: ('task(1y6ozbp08gv9xdp)', 3, '(intense #2888f0 cyan) dissolving to intense blue #7078e8 liquid, marblingai, water02, water elemental,\n,\n,\n,\n\n', 'woman, women, female, girl, (worst quality, low quality:2), monochrome, zombie, overexposure, watermark, text, bad anatomy,bad hand,extra hands,extra fingers,too many girl, female, woman, women, fingers,fused fingers,bad arm,distorted arm,extra arms,fused arms,extra legs,missing leg,disembodied leg,extra nipples, detached arm, liquid hand,inverted hand,disembodied limb, small breasts, loli, oversized head,extra body, extra navel,easynegative,(hair between eyes),sketch, duplicate, ugly, huge eyes, text, logo, worst face, (bad and mutated hands:1.3), (blurry:2.0), horror, geometry, bad_prompt, (bad hands), (missing fingers), multiple limbs, bad anatomy, (interlocked fingers:1.2), Ugly Fingers, (extra digit and hands and fingers and legs and arms:1.4), ((girl)), (deformed fingers:1.2), (long fingers:1.2),(bad-artist-anime), bad-artist, bad hand, extra legs ,(ng_deepnegative_v1_75t), 16-token-negative-deliberate-neg, bad-artist ,badhandsv5-neg , bad_prompt_version2, bad-picture-chill-75v, easynegative, learned_embeds , ng_deepnegative_v1_75t, verybadimagenegative_v1.3\n', [], <PIL.Image.Image image mode=RGBA size=512x768 at 0x1BB9C0F68F0>, None, {'image': <PIL.Image.Image image mode=RGBA size=512x768 at 0x1BBA6097370>, 'mask': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=512x768 at 0x1BBA6096D70>}, <PIL.Image.Image image mode=RGBA size=614x921 at 0x1BBA6095060>, <PIL.Image.Image image mode=RGBA size=614x921 at 0x1BB9C9E02E0>, None, None, 50, 18, 4, 0, 1, False, False, 1, 1, 30, 1.5, 0.63, 112770274.0, -1.0, 0, 0, 0, False, 0, 768, 512, 1, 0, 1, 32, 0, '', '', '', [], 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, 
False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, 3, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <controlnet.py.UiControlNetUnit object at 0x000001BBDFC56140>, <controlnet.py.UiControlNetUnit object at 0x000001BBF6787FD0>, <controlnet.py.UiControlNetUnit object at 0x000001BBF6786590>, '

\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 17, '1,2,3,4', ['anythingV3_fp16.ckpt [812cd9f9d9]', 'FINAL\aatrok_goodmodel.safetensors [bb4168e972]', 'FINAL\adobisRealFlexibleMix_v2.safetensors [9e8a2c7a5d]', 'FINAL\chilloutmix_NiPrunedFp32Fix.safetensors [fc2511737a]', 'FINAL\chillwithnai_v10.safetensors [cc2f08f0a3]', 'FINAL\clarity_2.safetensors [73ab0ffbb9]', 'FINAL\deliberate_v2.safetensors [9aba26abdf]', 'FINAL\DreamFul_v2.7.safetensors [9a2ca7b2d1]', 'FINAL\DucHaitenAIart-v4.5.3.safetensors [ce4d987a0e]', 'FINAL\edgeOfRealism_eorV20Fp16BakedVAE.safetensors [7f6146b8a9]', 'FINAL\FEMALE\aresMix_v02.safetensors [1422136240]', 'FINAL\FEMALE\lofi_V21.safetensors [a158dc2e8a]', 'FINAL\FEMALE\perfectdeliberate_V10.safetensors [9d8bb7df43]', 'FINAL\FEMALE\serassphotocentaurs_10.safetensors [dc1873db3b]', 'FINAL\halcyon_v20Bergamot.safetensors [4b0f91f7d3]', 'FINAL\homodiffusionGay_homodiffusionV20FP32.safetensors [af4f527b17]', 'FINAL\homodiffusionGay_homodiffusionXMMM.safetensors [4b96759a3f]', 'FINAL\homoerotic_v2.safetensors [b656369cf7]', 'FINAL\icbinpICantBelieveIts_v6.safetensors [ac34765554]', 'FINAL\lehinamodel_v11.safetensors [cbdf30ac14]', 'FINAL\liberty_main.safetensors [8634d80dec]', 'FINAL\moonmix_fantasy20.safetensors [7458ea4104]', 'FINAL\neurogenV11_v11.safetensors [7ce537a0b7]', 'FINAL\orusium_v10.safetensors [e9833bc61b]', 'FINAL\plazmMen_plazmV10.ckpt [5ed3269242]', 'FINAL\povSkinTexture_v2.safetensors [3d98aa9feb]'], 18, '0.5,0.6,0.7', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, 50, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "I:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "I:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "I:\stable-diffusion-webui\modules\img2img.py", line 178, in img2img
    processed = process_images(p)
  File "I:\stable-diffusion-webui\modules\processing.py", line 611, in process_images
    res = process_images_inner(p)
  File "I:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "I:\stable-diffusion-webui\modules\processing.py", line 776, in process_images_inner
    image = apply_overlay(image, p.paste_to, i, p.overlay_images)
  File "I:\stable-diffusion-webui\modules\processing.py", line 70, in apply_overlay
    image = images.resize_image(1, image, w, h)
  File "I:\stable-diffusion-webui\modules\images.py", line 288, in resize_image
    resized = resize(im, src_w, src_h)
  File "I:\stable-diffusion-webui\modules\images.py", line 271, in resize
    im = upscaler.scaler.upscale(im, scale, upscaler.data_path)
  File "I:\stable-diffusion-webui\modules\upscaler.py", line 62, in upscale
    img = self.do_upscale(img, selected_model)
  File "I:\stable-diffusion-webui\modules\esrgan_model.py", line 150, in do_upscale
    img = esrgan_upscale(model, img)
  File "I:\stable-diffusion-webui\modules\esrgan_model.py", line 224, in esrgan_upscale
    output = upscale_without_tiling(model, tile)
  File "I:\stable-diffusion-webui\modules\esrgan_model.py", line 203, in upscale_without_tiling
    output = model(img)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\modules\esrgan_model_arch.py", line 61, in forward
    return self.model(feat)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 415, in lora_Conv2d_forward
    return torch.nn.Conv2d_forward_before_lora(self, input)
  File "I:\stable-diffusion-webui\extensions\a1111-sd-webui-lycoris\lycoris.py", line 746, in lyco_Conv2d_forward
    return torch.nn.Conv2d_forward_before_lyco(self, input)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels instead
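The secondary error in the log above ("'Tensor' object has no attribute 'get'") is unrelated to the inpaint bug: hypernetwork.load() assumes torch.load() returns a state dict, but water02.pt apparently holds a bare tensor (for example, an embedding placed in the hypernetworks folder). A minimal illustration of that failure mode (an assumption about the file's contents, not webui code):

```python
import io
import torch

# Simulate a .pt file that stores a bare tensor instead of a state dict,
# e.g. a textual-inversion embedding dropped into the hypernetworks folder.
buf = io.BytesIO()
torch.save(torch.zeros(3), buf)
buf.seek(0)
obj = torch.load(buf)

# A Tensor has no .get(), so state_dict.get('layer_structure', ...) raises
# AttributeError exactly as in the log.
has_get = hasattr(obj, "get")
print(has_get)  # False

# A defensive loader would check the type before treating it as a dict:
layer_structure = obj.get("layer_structure", [1, 2, 1]) if isinstance(obj, dict) else None
print(layer_structure)  # None
```

Moving the stray file out of models\hypernetworks should silence that particular error; it does not affect the channel-mismatch bug.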

AlexYez commented 1 year ago

Confirmed, same error with "masked area only" selected; steps to reproduce are the same as above.

RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels instead