Closed ostap667inbox closed 10 months ago
I guess it means that nothing was detected in the picture by the prompt. Can you try a different detection prompt? Maybe I should add a clearer error message for the case when nothing has been detected.
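A clearer no-detection error could be a small guard early in the generation path. A minimal sketch, assuming the detector returns a list of masks (the function name and message here are hypothetical, not the extension's actual code):

```python
def ensure_masks_found(masks, detection_prompt):
    """Fail early with a readable message instead of a cryptic downstream error."""
    if not masks:
        raise ValueError(
            f"[Replacer] Nothing matched the detection prompt "
            f"'{detection_prompt}'. Try a broader or different prompt."
        )
    return masks
```

This way the user sees which prompt failed instead of an unrelated tensor error later in the pipeline.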
I've tried different images and different prompts. The error occurs only when an inpainting model is selected in the WebUI. With other models the error does not appear, but the generation results are poor because those models are not designed for inpainting. A vicious circle :) The Segment Anything extension works normally: with the same prompts on the same images, masks are generated correctly.
Can you give me the picture, if it's not confidential? And which version of the WebUI do you use?
Latest WebUI, 1.6.1. Any image.
When I try a normal non-inpainting model, the extension works fine, but the result is unsatisfactory because the model is not designed for inpainting. For example: Model: realisticVisionV60B1_v60B1VAE Detection prompt: face Positive prompt: black woman face
Then I change only the model in the WebUI to an inpaint model, for example realisticVisionV60B1_v60B1InpaintingVAE. I don't change anything else. I hit Run and get this in the console:
MasksCreator restored from cache
35%|█████████████████████████████ | 7/20 [00:02<00:04, 2.71it/s]
[Replacer] Exception: Sizes of tensors must match except in dimension 1. Expected size 1 but got size 2 for tensor number 1 in the list.
*** Error completing request
*** Arguments: ('face', 'black woman face', 'poor quality, low quality, low res', 0, <PIL.Image.Image image mode=RGBA size=768x768 at 0x25DC49BF460>, None, '', '', True, '', -1, 'DPM++ 2M SDE Karras', 20, 0.3, 35, 4, 'sam_vit_h_4b8939.pth', 'GroundingDINO_SwinT_OGC (694MB)', 5.5, 1, 20, 0, 768, 1, 768, 1) {}
Traceback (most recent call last):
File "C:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "C:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "C:\stable-diffusion-webui\extensions\sd-webui-replacer\scripts\replacer_generate.py", line 263, in generate
generateSingle(image, gArgs, saveDir, "", save_to_dirs)
File "C:\stable-diffusion-webui\extensions\sd-webui-replacer\scripts\replacer_generate.py", line 126, in generateSingle
inpaint(image, gArgs, savePath, saveSuffix, save_to_dirs)
File "C:\stable-diffusion-webui\extensions\sd-webui-replacer\scripts\replacer_generate.py", line 80, in inpaint
processed = process_images(p)
File "C:\stable-diffusion-webui\modules\processing.py", line 732, in process_images
res = process_images_inner(p)
File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "C:\stable-diffusion-webui\modules\processing.py", line 1528, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
return func()
File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1337, in forward
xc = torch.cat([x] + c_concat, dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 1 but got size 2 for tensor number 1 in the list.
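The RuntimeError comes from the final `torch.cat([x] + c_concat, dim=1)` in ddpm.py: an SD1.5 inpainting UNet concatenates the 4-channel latent with a 5-channel image conditioning (mask plus masked-image latent) along the channel dimension, and `torch.cat` requires every other dimension, including batch, to match. A minimal sketch of the failure mode (shapes are illustrative, not taken from the actual run):

```python
import torch

# Matching batch sizes: concatenating along dim=1 (channels) works and
# yields the 9-channel input an SD1.5 inpainting UNet expects.
x = torch.randn(2, 4, 96, 96)      # latents: batch 2, 4 channels
cond = torch.randn(2, 5, 96, 96)   # image conditioning: mask + masked latent
assert torch.cat([x, cond], dim=1).shape == (2, 9, 96, 96)

# A batch mismatch (1 vs 2) in dim 0 reproduces the reported error.
bad = torch.randn(1, 5, 96, 96)
try:
    torch.cat([x, bad], dim=1)
except RuntimeError as e:
    print(e)  # "Sizes of tensors must match except in dimension 1 ..."
```

So the message suggests the image conditioning reaching the inpainting model has a different batch size than the latent batch, which is consistent with it only happening for inpaint checkpoints (non-inpaint models never perform this concatenation).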
---
@ostap667inbox I can't reproduce this error; I copied all your generation settings. Maybe the problem is in your system. Can you send the full console log and tell me which GPU you use? I know AMD has big issues on Windows.
I'm using an Nvidia RTX 3060 12GB. Above I have given the full error log after clicking Run in the extension. Before that, there is only the usual WebUI startup log in the console:
Already up to date.
venv "C:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.1
Commit hash: 4afaaf8a020c1df457bcf7250cb1c7f609699fa7
Launching Web UI with arguments: --enable-insecure-extension-access --allow-code --listen --theme=dark --xformers --medvram-sdxl --api --embeddings-dir=C:/stable-diffusion-webui/embeddings/ --ckpt-dir=F:/NEURAL/MODELS/ --lora-dir=F:/NEURAL/LORA/ --vae-dir=F:/NEURAL/VAE/ --opt-channelslast --opt-split-attention --autolaunch --update-check --update-all-extensions --opt-sdp-no-mem-attention
python_server_full_path: C:\stable-diffusion-webui\extensions\Auto-Photoshop-StableDiffusion-Plugin\server/python_server
Civitai Helper: Get Custom Model Folder
[-] ADetailer initialized. version: 23.11.1, num models: 14
2023-12-09 11:44:48,715 - ControlNet - INFO - ControlNet v1.1.422
ControlNet preprocessor location: C:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-12-09 11:44:48,999 - ControlNet - INFO - ControlNet v1.1.422
[sd-webui-freeu] Controlnet support: *enabled*
11:44:50 - ReActor - STATUS - Running v0.6.0-a1
Loading weights [121c7c0944] from F:/NEURAL/MODELS/SD15\Realistic Vision\realisticVisionV60B1_v60B1VAE.safetensors
Creating model from config: C:\stable-diffusion-webui\configs\v1-inference.yaml
Civitai Helper: Settings:
Civitai Helper: max_size_preview: True
Civitai Helper: skip_nsfw_preview: False
Civitai Helper: open_url_with_js: True
Civitai Helper: proxy:
Civitai Helper: use civitai api key: False
Running on local URL: http://0.0.0.0:7860
To create a public link, set `share=True` in `launch()`.
🤯 LobeTheme: Initializing...
Startup time: 53.8s (prepare environment: 34.9s, import torch: 4.2s, import gradio: 1.0s, setup paths: 1.2s, initialize shared: 0.3s, other imports: 0.9s, setup codeformer: 0.3s, list SD models: 0.2s, load scripts: 8.0s, create ui: 1.8s, gradio launch: 0.7s, add APIs: 0.1s).
Applying attention optimization: xformers... done.
Model loaded in 10.3s (load weights from disk: 0.3s, create model: 1.1s, apply weights to model: 4.3s, load VAE: 0.2s, load textual inversion embeddings: 3.7s, calculate empty prompt: 0.7s).
I still cannot reproduce it, but I have a different system.
It seems you have rubbish in your args:
--xformers --opt-channelslast --opt-split-attention --opt-sdp-no-mem-attention
These are different types of optimization; you should leave only one of them (--xformers is probably the best for RTX cards).
Also, I found this text about --opt-channelslast at https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Optimizations:
--opt-channelslast Changes torch memory type for stable diffusion to channels last. Effects not closely studied.
Maybe that is the problem.
That wouldn't be a problem. Mutually exclusive optimization types are ignored by the WebUI, and the one selected in the settings is used. Still, I have removed the unnecessary startup parameters, leaving only xformers:
Launching Web UI with arguments: --enable-insecure-extension-access --allow-code --listen --theme=dark --xformers --medvram-sdxl --api --embeddings-dir=C:/stable-diffusion-webui/embeddings/ --ckpt-dir=F:/NEURAL/MODELS/ --lora-dir=F:/NEURAL/LORA/ --vae-dir=F:/NEURAL/VAE/ --autolaunch --update-check --update-all-extensions
That didn't help; same problem.
I have now tested the extension with about 30 models, 9 of which are inpaint-type models. I confirm that this problem occurs only with inpaint models. Unfortunately, this is all I can do to help diagnose the problem.
Hm, I don't know then. Maybe somebody else can help you. But does regular inpainting work for you?
Yes, inpaint models work fine in img2img. I have no idea why these particular models cause this crash on my computer, or why the bug can't be reproduced on another computer. If anyone else encounters a similar problem, I suggest they post here. Maybe there is some nuance I didn't notice.
After running with default settings and prompts, the error appears ONLY if an inpainting model is selected (Juggernaut Inpainting, for example).