Closed. yamanobe96 closed this issue 1 year ago.
@echo off

set PYTHON=C:\Users\****\AppData\Local\Programs\Python\Python310\python.exe
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --opt-sdp-attention --autolaunch --theme dark

git pull

call webui.bat
Already up to date.
venv "C:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.5.0
Commit hash: a3ddf464a2ed24c999f67ddfef7969f8291567be
Launching Web UI with arguments: --xformers --opt-sdp-attention --autolaunch --theme dark
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: C:\AI\stable-diffusion-webui\extensions\Stable-Diffusion-Webui-Civitai-Helper\setting.json
Civitai Helper: No setting file, use default
2023-07-26 00:26:42,007 - ControlNet - INFO - ControlNet v1.1.233
ControlNet preprocessor location: C:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-07-26 00:26:42,077 - ControlNet - INFO - ControlNet v1.1.233
reading lora C:\AI\stable-diffusion-webui\models\Lora\mast Lora\01キャラ\データ無し\xat.safetensors: UnicodeDecodeError
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\extensions-builtin\Lora\network.py", line 34, in __init__
    self.metadata = cache.cached_data_for_file('safetensors-metadata', "lora/" + self.name, filename, read_metadata)
  File "C:\AI\stable-diffusion-webui\modules\cache.py", line 111, in cached_data_for_file
    value = func()
  File "C:\AI\stable-diffusion-webui\extensions-builtin\Lora\network.py", line 27, in read_metadata
    metadata = sd_models.read_metadata_from_safetensors(filename)
  File "C:\AI\stable-diffusion-webui\modules\sd_models.py", line 233, in read_metadata_from_safetensors
    json_obj = json.loads(json_data)
  File "C:\Users\kurod\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 341, in loads
    s = s.decode(detect_encoding(s), 'surrogatepass')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x98 in position 116043: invalid start byte
Loading weights [2336dbf342] from C:\AI\stable-diffusion-webui\models\Stable-diffusion\mast model\dreamshaper_631BakedVae.safetensors
Thanks for being a Gradio user! If you have questions or feedback, please join our Discord server and chat with us: https://discord.gg/feTf9x3ZSB
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Creating model from config: C:\AI\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Startup time: 8.2s (launcher: 1.8s, import torch: 2.2s, import gradio: 0.6s, setup paths: 0.5s, other imports: 0.6s, list SD models: 0.6s, load scripts: 0.9s, create ui: 0.4s, gradio launch: 0.5s).
Applying attention optimization: xformers... done.
*** Error loading embedding anime.safetensors
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 227, in load_from_dir
    self.load_from_file(fullfn, fn)
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 192, in load_from_file
    assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
*** Error loading embedding aruruu_v10.pt
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 227, in load_from_dir
    self.load_from_file(fullfn, fn)
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 198, in load_from_file
    raise Exception(f"Couldn't identify (unknown) as neither textual inversion embedding nor diffuser concept.")
Exception: Couldn't identify aruruu_v10.pt as neither textual inversion embedding nor diffuser concept.
---
*** Error loading embedding grapefruitVAE_v1.pt
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 227, in load_from_dir
    self.load_from_file(fullfn, fn)
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 198, in load_from_file
    raise Exception(f"Couldn't identify (unknown) as neither textual inversion embedding nor diffuser concept.")
Exception: Couldn't identify grapefruitVAE_v1.pt as neither textual inversion embedding nor diffuser concept.
---
*** Error verifying pickled file from C:\AI\stable-diffusion-webui\embeddings\juju_v1.bin
*** The file may be malicious, so the program is not going to read it.
*** You can skip this check with --disable-safe-unpickle commandline argument.
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\modules\safe.py", line 83, in check_pt
    with zipfile.ZipFile(filename) as z:
  File "C:\Users\kurod\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1267, in __init__
    self._RealGetContents()
  File "C:\Users\kurod\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1334, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\modules\safe.py", line 137, in load_with_extra
    check_pt(filename, extra_handler)
  File "C:\AI\stable-diffusion-webui\modules\safe.py", line 104, in check_pt
    unpickler.load()
  File "C:\Users\kurod\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1213, in load
    dispatch[key[0]](self)
KeyError: 255
---
*** Error loading embedding juju_v1.bin
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 227, in load_from_dir
    self.load_from_file(fullfn, fn)
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 185, in load_from_file
    if 'string_to_param' in data:
TypeError: argument of type 'NoneType' is not iterable
---
*** Error loading embedding kyoudaSuzukaAnime_10.pt
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 227, in load_from_dir
    self.load_from_file(fullfn, fn)
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 198, in load_from_file
    raise Exception(f"Couldn't identify (unknown) as neither textual inversion embedding nor diffuser concept.")
Exception: Couldn't identify kyoudaSuzukaAnime_10.pt as neither textual inversion embedding nor diffuser concept.
---
*** Error loading embedding maca_v10.pt
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 227, in load_from_dir
    self.load_from_file(fullfn, fn)
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 198, in load_from_file
    raise Exception(f"Couldn't identify (unknown) as neither textual inversion embedding nor diffuser concept.")
Exception: Couldn't identify maca_v10.pt as neither textual inversion embedding nor diffuser concept.
---
*** Error loading embedding nemotoNagiNemonagiNiji_10.pt
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 227, in load_from_dir
    self.load_from_file(fullfn, fn)
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 198, in load_from_file
    raise Exception(f"Couldn't identify (unknown) as neither textual inversion embedding nor diffuser concept.")
Exception: Couldn't identify nemotoNagiNemonagiNiji_10.pt as neither textual inversion embedding nor diffuser concept.
---
*** Error loading embedding tokisakiKurumiDateA_v10.pt
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 227, in load_from_dir
    self.load_from_file(fullfn, fn)
  File "C:\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 198, in load_from_file
    raise Exception(f"Couldn't identify (unknown) as neither textual inversion embedding nor diffuser concept.")
Exception: Couldn't identify tokisakiKurumiDateA_v10.pt as neither textual inversion embedding nor diffuser concept.
---
Model loaded in 4.0s (load weights from disk: 0.9s, create model: 0.3s, apply weights to model: 0.7s, apply half(): 0.5s, move model to device: 0.9s, load textual inversion embeddings: 0.8s).
Warning: Nonstandard height / width for ulscaled size
Regional Prompter Active, Pos tokens : [2, 2, 3, 2], Neg tokens : [464]
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:04<00:00, 7.39it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 15/15 [00:35<00:00, 2.37s/it]
  0%|                                                                                  | 0/30 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(n8wcngc1b97uu9e)', '(3girls) ADDCOMM\nsmile, ADDCOL\nlong hair, ADDCOL\nblack hair', '(bad_prompt_version2:0.8),bad-picture-chill-75v,(NG_DeepNegative_V1_75T), (easynegative),(badhandv4)(make up:1.5) (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), (sepia), (deformed iris, deformed pupils, semi-realistic, 3d, render, cg, painting, drawing, cartoon, anime, comic:0.6), watermark, bad_quality, long body, long neck, jpeg artifacts, (pubic hair:1.5), bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, ,signature, watermark, username, blurry, artist name, multiple legs, malformation,(thick lips)(mask:2),mutated fingers bad fingers missing fingers extra fingers liquid fingers poorly drawn fingers', [], 30, 16, False, False, 2, 1, 9.5, 3214313276.0, -1.0, 0, 0, 0, False, 768, 512, True, 0.7, 3.2, '4x-UltraMix_Balanced', 15, 0, 0, 0, '', '', [], <gradio.routes.Request object at 0x000001E84A40BA00>, 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001E8AE0055D0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001E84A3D7190>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001E84A3D76D0>, True, False, 'Matrix', 'Horizontal', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', '0.4', None, False, False, False, False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\modules\call_queue.py", line 58, in f
    res = list(func(*args, **kwargs))
  File "C:\AI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\modules\txt2img.py", line 62, in txt2img
    processed = processing.process_images(p)
  File "C:\AI\stable-diffusion-webui\modules\processing.py", line 673, in process_images
    res = process_images_inner(p)
  File "C:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\AI\stable-diffusion-webui\modules\processing.py", line 793, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\AI\stable-diffusion-webui\modules\processing.py", line 1043, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 464, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 303, in launch_sampling
    return func()
  File "C:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 464, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 202, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
    return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
  File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 273, in _forward
    x = self.attn2(self.norm2(x), context=context) + x
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\extensions\sd-webui-regional-prompter\scripts\attention.py", line 366, in forward
    ox = matsepcalc(x, contexts, mask, self.pn, 1)
  File "C:\AI\stable-diffusion-webui\extensions\sd-webui-regional-prompter\scripts\attention.py", line 174, in matsepcalc
    out = out.reshape(out.size()[0], dsh, dsw, out.size()[2]) # convert to main shape.
RuntimeError: shape '[1, 77, 51, 320]' is invalid for input of size 1966080
---
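The UnicodeDecodeError near the top of the log comes from reading LoRA metadata whose safetensors header contains a byte that is not valid UTF-8. A minimal sketch of the parsing involved, under the assumption that only the `__metadata__` block matters here; `read_safetensors_metadata` is an illustrative stand-in, not webui's actual `read_metadata_from_safetensors`:

```python
import json
import struct

def read_safetensors_metadata(blob: bytes) -> dict:
    """Return the __metadata__ dict from raw .safetensors bytes.

    A .safetensors file begins with an 8-byte little-endian header length,
    followed by that many bytes of JSON. json.loads() decodes bytes as
    strict UTF-8, so one stray byte (the 0x98 in the log) raises
    UnicodeDecodeError; decoding with errors="replace" degrades the bad
    byte to U+FFFD instead of aborting the whole LoRA listing.
    """
    (header_len,) = struct.unpack("<Q", blob[:8])
    header = blob[8:8 + header_len]
    try:
        obj = json.loads(header)
    except UnicodeDecodeError:
        obj = json.loads(header.decode("utf-8", errors="replace"))
    return obj.get("__metadata__", {})

# Build an in-memory file whose metadata contains an invalid UTF-8 byte.
bad_header = b'{"__metadata__": {"ss_output_name": "a\x98b"}}'
blob = struct.pack("<Q", len(bad_header)) + bad_header
print(read_safetensors_metadata(blob))  # the 0x98 byte survives as U+FFFD
```

This error only breaks the metadata listing for that one LoRA file; it is unrelated to the generation failure below.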
Sorry for the long post.
The traceback indicates this is happening in the Regional Prompter extension; the error is not related to webui functionality.
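The arithmetic behind that RuntimeError can be checked directly, using only numbers from the log and assuming PyTorch is available: the extension asks for a 77x51 token grid (3,927 tokens), but the attention output actually holds 6,144 tokens, exactly the 96x64 latent grid of a 768x512 first pass, so the reshape cannot succeed. (That the stale 77x51 grid comes from the previous image's hires pass is an inference, not something the log states.)

```python
import torch

# Numbers taken from the RuntimeError in the log.
batch, channels = 1, 320
dsh, dsw = 77, 51                    # grid the extension tried to reshape to
tokens = (768 // 8) * (512 // 8)     # 6144: the 96x64 latent grid of a 768x512 pass

out = torch.zeros(batch, tokens, channels)
assert out.numel() == 1966080        # matches "input of size 1966080"
assert dsh * dsw != tokens           # 3927 != 6144, so reshape must fail

try:
    out.reshape(batch, dsh, dsw, channels)
except RuntimeError as e:
    print(e)  # shape '[1, 77, 51, 320]' is invalid for input of size 1966080
```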
Is there an existing issue for this?
What happened?
I installed and started stable-diffusion-webui. When Hires. fix is enabled and the batch count is set to 2 or more, an error occurs and generation stops.
Steps to reproduce the problem
What should have happened?
Two or more images should be generated in a row.
Version or Commit where the problem happens
version: v1.5.0 • python: 3.10.6 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.32.0 • checkpoint: 2336dbf342
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
Nvidia GPUs (RTX 20 above)
Cross attention optimization
Automatic
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
List of extensions
Stable-Diffusion-Webui-Civitai-Helper https://github.com/butaixianran/Stable-Diffusion-Webui-Civitai-Helper.git main 920ca326 Tue May 23 11:53:22 2023 unknown
sd-webui-controlnet https://github.com/Mikubill/sd-webui-controlnet.git main e9679f8f Sun Jul 23 18:15:05 2023 unknown
sd-webui-regional-prompter https://github.com/hako-mikan/sd-webui-regional-prompter.git main 0790e799 Sat Jul 22 15:26:12 2023 unknown
stable-diffusion-webui-composable-lora https://github.com/a2569875/stable-diffusion-webui-composable-lora.git main e8f461f0 Wed Jun 28 09:02:27 2023 unknown
stable-diffusion-webui-two-shot https://github.com/opparco/stable-diffusion-webui-two-shot main 9936c52e Sun Feb 19 08:40:41 2023 unknown
LDSR built-in None Tue Jul 25 15:28:44 2023
Lora built-in None Tue Jul 25 15:28:44 2023
ScuNET built-in None Tue Jul 25 15:28:44 2023
SwinIR built-in None Tue Jul 25 15:28:44 2023
canvas-zoom-and-pan built-in None Tue Jul 25 15:28:44 2023
extra-options-section built-in None Tue Jul 25 15:28:44 2023
mobile built-in None Tue Jul 25 15:28:44 2023
prompt-bracket-checker built-in None Tue Jul 25 15:28:44 2023
Console logs
Additional information
No response