Duemellon closed this issue 1 year ago.
Hello! Thank you for posting. I could not get the same error on my machine. Could you please disable all the other plugins and try again?
I was getting various Gradio errors for different extensions. After removing all those that threw up errors I got this.
venv "E:\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Version: v1.6.0 Commit hash: 5ef669de080814067961f28357256e8fe27544f4 Installing pyqt5 requirement for depthmap script Launching Web UI with arguments: no module 'xformers'. Processing without... no module 'xformers'. Processing without... No module 'xformers'. Proceeding without it. [-] ADetailer initialized. version: 23.9.2, num models: 14 2023-09-11 21:38:58,723 - ControlNet - INFO - ControlNet v1.1.410 ControlNet preprocessor location: E:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads 2023-09-11 21:38:59,283 - ControlNet - INFO - ControlNet v1.1.410 Loading weights [16c911ef6e] from E:\stable-diffusion-webui\models\Stable-diffusion\2dCreepyArtMonster_v11.safetensors Creating model from config: E:\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 56.3s (prepare environment: 19.9s, import torch: 7.3s, import gradio: 4.8s, setup paths: 4.0s, initialize shared: 0.4s, other imports: 3.7s, setup codeformer: 0.5s, list SD models: 0.3s, load scripts: 10.5s, initialize extra networks: 0.2s, create ui: 3.8s, gradio launch: 1.1s).
Applying attention optimization: Doggettx... done.
INFO:scripts.iib.logger:gen_info_completed 0 E:\ComfyUI\output\ComfyUI01998.png
INFO:scripts.iib.logger:gen_info_completed 1 E:\ComfyUI\output\ComfyUI01998.png
INFO:scripts.iib.logger:gen_info_completed 2 E:\ComfyUI\output\ComfyUI01998.png
INFO:scripts.iib.logger:gen_info_completed 3 E:\ComfyUI\output\ComfyUI01998.png
INFO:scripts.iib.logger:img_update_func E:\ComfyUI\output\ComfyUI01998.png
INFO:scripts.iib.logger:gen_info_completed 4 E:\ComfyUI\output\ComfyUI01998.png
INFO:scripts.iib.logger:gen_info_completed 5 E:\ComfyUI\output\ComfyUI01998.png
INFO:scripts.iib.logger:gen_info_completed 6 E:\ComfyUI\output\ComfyUI01998.png
INFO:scripts.iib.logger:gen_info_completed 7 E:\ComfyUI\output\ComfyUI01998.png
INFO:scripts.iib.logger:gen_info_completed 8 E:\ComfyUI\output\ComfyUI01998.png
Model loaded in 47.4s (load weights from disk: 4.5s, create model: 0.5s, apply weights to model: 15.8s, load textual inversion embeddings: 5.8s, calculate empty prompt: 20.4s).
0%| | 0/4 [00:01<?, ?it/s]
Error completing request
Arguments: ('task(56vhzb8x1npuw0a)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=768x960 at 0x20C988B58D0>, None, None, None, None, None, None, 20, 'DPM++ 2M Karras', 4, 0, 1, 1, 1, 7, 1.5, 0.16, 0, 960, 768, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x0000020C10B28220>, 11, False, '', 0.8, 3348343015, False, -1, 0, 0, 0, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, True, False, 0, -1, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 2048, 128, True, True, True, False, False, 0, 16, 8, 'animatediffMotion_v14.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020C10B29930>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020C10B2AB30>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020C10B2A0E0>, False, 'None', 20, ' Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, 
Denoising strength: 0.8 Will upscale the image by the selected scale factor; use width and height sliders to set tile size Will upscale the image depending on the selected target size typeCFG Scale
should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '
You still have plugins loaded. Please disable the remaining plugins one by one and record which plugin was the last one disabled before the error stopped occurring.
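(For the bisection itself, here is a minimal sketch of one way to script it rather than clicking through the UI. It assumes the install path shown in the logs and a hypothetical `extensions.disabled` holding folder; moving an extension folder out of `extensions` and restarting the webui is equivalent to disabling it.)

```python
# Minimal sketch for bisecting extensions, assuming the install path from the
# logs above and a hypothetical "extensions.disabled" holding folder.
import shutil
from pathlib import Path

WEBUI = Path(r"E:\stable-diffusion-webui")
ENABLED = WEBUI / "extensions"
DISABLED = WEBUI / "extensions.disabled"
DISABLED.mkdir(exist_ok=True)

def disable(name: str) -> None:
    """Move one extension out of the extensions folder (restart the webui afterwards)."""
    shutil.move(str(ENABLED / name), str(DISABLED / name))

def enable(name: str) -> None:
    """Move it back once it is cleared as the culprit."""
    shutil.move(str(DISABLED / name), str(ENABLED / name))

if __name__ == "__main__":
    print("Currently enabled:", sorted(p.name for p in ENABLED.iterdir() if p.is_dir()))
```

Disable half at a time and keep halving until the offending extension is isolated; the `--disable-all-extensions` launch flag, if your build has it, is a quick way to confirm a clean baseline first.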
All non-built-in extensions were disabled for this one. It went through the process but produced an empty side-by-side file. I then tried rerunning from that point and A1111 did not respond. I'll run again with t2i and see if I get similar results. This was i2i.
venv "E:\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Version: v1.6.0 Commit hash: 5ef669de080814067961f28357256e8fe27544f4 Installing pyqt5 requirement for depthmap script Launching Web UI with arguments: no module 'xformers'. Processing without... no module 'xformers'. Processing without... No module 'xformers'. Proceeding without it. [-] ADetailer initialized. version: 23.9.2, num models: 14 2023-09-12 10:08:32,275 - ControlNet - INFO - ControlNet v1.1.410 ControlNet preprocessor location: E:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads 2023-09-12 10:08:33,191 - ControlNet - INFO - ControlNet v1.1.410 Loading weights [bc316906d1] from E:\stable-diffusion-webui\models\Stable-diffusion\Bastard_v4_LiveAction_Pruned.safetensors Creating model from config: E:\stable-diffusion-webui\configs\v1-inference.yaml Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 119.0s (prepare environment: 42.8s, import torch: 18.2s, import gradio: 8.6s, setup paths: 10.1s, initialize shared: 1.2s, other imports: 8.9s, setup codeformer: 1.1s, setup gfpgan: 0.1s, list SD models: 1.7s, load scripts: 20.9s, reload hypernetworks: 0.2s, create ui: 3.7s, gradio launch: 2.0s, app_started_callback: 0.1s).
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "E:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\websockets\websockets_impl.py", line 247, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in call
return await self.app(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 273, in call
await super().call(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 122, in call
await self.middleware_stack(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 149, in call
await self.app(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\cors.py", line 76, in call
await self.app(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 26, in call
await self.app(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in call
raise exc
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in call
await self.app(scope, receive, sender)
File "E:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in call
raise e
File "E:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in call
await self.app(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 718, in call
await route.handle(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 341, in handle
await self.app(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 82, in app
await func(session)
File "E:\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 289, in app
await dependant.call(**values)
File "E:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 604, in join_queue
session_info = await asyncio.wait_for(
File "D:\Python\Python310\lib\asyncio\tasks.py", line 445, in wait_for
return fut.result()
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\websockets.py", line 133, in receive_json
self._raise_on_disconnect(message)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\websockets.py", line 105, in _raise_on_disconnect
raise WebSocketDisconnect(message["code"])
starlette.websockets.WebSocketDisconnect: 1006
Applying attention optimization: Doggettx... done.
Model loaded in 55.2s (load weights from disk: 3.6s, create model: 2.0s, apply weights to model: 20.9s, apply half(): 13.6s, load textual inversion embeddings: 4.1s, calculate empty prompt: 10.8s).
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
Installing pyqt5 requirement for depthmap script
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [bc316906d1] from E:\stable-diffusion-webui\models\Stable-diffusion\Bastard_v4_LiveAction_Pruned.safetensors
Creating model from config: E:\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 49.6s (prepare environment: 17.6s, import torch: 9.3s, import gradio: 4.9s, setup paths: 4.0s, initialize shared: 0.4s, other imports: 4.2s, setup codeformer: 0.5s, list SD models: 0.3s, load scripts: 7.0s, initialize extra networks: 0.2s, create ui: 0.8s, gradio launch: 0.6s).
Applying attention optimization: Doggettx... done.
Model loaded in 8.2s (load weights from disk: 1.1s, create model: 0.3s, apply weights to model: 2.8s, apply half(): 1.9s, load textual inversion embeddings: 0.3s, calculate empty prompt: 1.8s).
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:01<00:00, 2.14it/s]
DepthMap v0.4.4 (cdbc6421)███████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 6.41it/s]
device: cuda
Loading model(s) ..
Loading model weights from ./models/midas/dpt_beit_large_384.pt
initialize network with normal
loading the model from ./models/pix2pix\latest_net_G.pth
Computing output(s) ..
0%| | 0/1 [00:00<?, ?it/s]wholeImage being processed in : 1344
Adjust factor is: 1.0
Selecting patches ...
Target resolution: (2688, 2688, 3)
Resulting depthmap resolution will be : (512, 512)
patches to process: 27
processing patch 0 / 26 | [ 0 0 475 475]
processing patch 1 / 26 | [ 18 73 439 439]
processing patch 2 / 26 | [ 82 27 421 421]
processing patch 3 / 26 | [ 0 0 366 366]
processing patch 4 / 26 | [ 0 55 366 366]
processing patch 5 / 26 | [ 0 110 366 366]
processing patch 6 / 26 | [ 55 0 366 366]
processing patch 7 / 26 | [110 0 366 366]
processing patch 8 / 26 | [ 18 183 329 329]
processing patch 9 / 26 | [ 73 183 329 329]
processing patch 10 / 26 | [ 0 0 256 256]
processing patch 11 / 26 | [ 0 55 256 256]
processing patch 12 / 26 | [ 0 110 256 256]
processing patch 13 / 26 | [ 0 165 256 256]
processing patch 14 / 26 | [ 0 219 256 256]
processing patch 15 / 26 | [ 55 0 256 256]
processing patch 16 / 26 | [110 0 256 256]
processing patch 17 / 26 | [ 18 293 219 219]
processing patch 18 / 26 | [ 73 293 219 219]
processing patch 19 / 26 | [128 293 219 219]
processing patch 20 / 26 | [ 0 55 146 146]
processing patch 21 / 26 | [ 0 110 146 146]
processing patch 22 / 26 | [ 0 165 146 146]
processing patch 23 / 26 | [ 0 219 146 146]
processing patch 24 / 26 | [ 55 0 146 146]
processing patch 25 / 26 | [110 0 146 146]
processing patch 26 / 26 | [165 0 146 146]
100%|███████████████████████████████████████████████████████████████████████████████████| 1/1 [07:24<00:00, 444.82s/it]
Computing output(s) done.
All done.
Total progress: 100%|███████████████████████████████████████████████████████████████████| 4/4 [07:52<00:00, 118.02s/it] Total progress: 100%|████████████████████████████████████████████████████████████████████| 4/4 [07:52<00:00, 6.41it/s]
t2i worked without a hitch multiple times:
venv "E:\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Version: v1.6.0 Commit hash: 5ef669de080814067961f28357256e8fe27544f4 Installing pyqt5 requirement for depthmap script Launching Web UI with arguments: no module 'xformers'. Processing without... no module 'xformers'. Processing without... No module 'xformers'. Proceeding without it. Loading weights [bc316906d1] from E:\stable-diffusion-webui\models\Stable-diffusion\Bastard_v4_LiveAction_Pruned.safetensors Running on local URL: http://127.0.0.1:7860 Creating model from config: E:\stable-diffusion-webui\configs\v1-inference.yaml
To create a public link, set `share=True` in `launch()`.
Startup time: 77.3s (prepare environment: 28.9s, import torch: 12.9s, import gradio: 7.2s, setup paths: 6.0s, initialize shared: 0.6s, other imports: 6.0s, setup SD model: 0.1s, setup codeformer: 0.7s, setup gfpgan: 0.1s, list SD models: 1.7s, load scripts: 9.3s, initialize extra networks: 0.1s, create ui: 1.9s, gradio launch: 2.0s).
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "E:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\websockets\websockets_impl.py", line 247, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in call
return await self.app(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 273, in call
await super().call(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 122, in call
await self.middleware_stack(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 149, in call
await self.app(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\cors.py", line 76, in call
await self.app(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 26, in call
await self.app(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in call
raise exc
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in call
await self.app(scope, receive, sender)
File "E:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in call
raise e
File "E:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in call
await self.app(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 718, in call
await route.handle(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 341, in handle
await self.app(scope, receive, send)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 82, in app
await func(session)
File "E:\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 289, in app
await dependant.call(**values)
File "E:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 604, in join_queue
session_info = await asyncio.wait_for(
File "D:\Python\Python310\lib\asyncio\tasks.py", line 445, in wait_for
return fut.result()
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\websockets.py", line 133, in receive_json
self._raise_on_disconnect(message)
File "E:\stable-diffusion-webui\venv\lib\site-packages\starlette\websockets.py", line 105, in _raise_on_disconnect
raise WebSocketDisconnect(message["code"])
starlette.websockets.WebSocketDisconnect: 1006
Applying attention optimization: Doggettx... done.
Model loaded in 43.7s (load weights from disk: 3.3s, create model: 0.7s, apply weights to model: 16.7s, apply half(): 9.5s, load textual inversion embeddings: 3.2s, calculate empty prompt: 10.3s).
Reusing loaded model Bastard_v4_LiveAction_Pruned.safetensors [bc316906d1] to load c3_v110.safetensors
Calculating sha256 for E:\stable-diffusion-webui\models\Stable-diffusion\c3_v110.safetensors: d3f14f5eba9be3fb870dbf265cb7f15b59155584f61838fbb133aeb3b4f6c15f
Loading weights [d3f14f5eba] from E:\stable-diffusion-webui\models\Stable-diffusion\c3_v110.safetensors
Applying attention optimization: Doggettx... done.
Weights loaded in 50.7s (send model to cpu: 2.4s, calculate hash: 36.4s, load weights from disk: 0.3s, apply weights to model: 0.3s, move model to device: 11.2s).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00, 3.66it/s]
DepthMap v0.4.4 (cdbc6421)█████████████████████████████████████████████████████████████| 20/20 [00:03<00:00, 5.82it/s]
device: cuda
Loading model(s) ..
Loading model weights from ./models/leres/res101.pth
initialize network with normal
loading the model from ./models/pix2pix\latest_net_G.pth
Computing output(s) ..
0%| | 0/1 [00:00<?, ?it/s]wholeImage being processed in : 1344
Adjust factor is: 1.0
Selecting patches ...
Target resolution: (2688, 2688, 3)
Resulting depthmap resolution will be : (512, 512)
patches to process: 26
processing patch 0 / 25 | [ 43 43 469 469]
processing patch 1 / 25 | [ 0 0 427 427]
processing patch 2 / 25 | [ 0 64 427 427]
processing patch 3 / 25 | [ 64 0 427 427]
processing patch 4 / 25 | [107 171 341 341]
processing patch 5 / 25 | [171 43 341 341]
processing patch 6 / 25 | [171 107 341 341]
processing patch 7 / 25 | [171 171 341 341]
processing patch 8 / 25 | [ 0 0 299 299]
processing patch 9 / 25 | [ 0 64 299 299]
processing patch 10 / 25 | [ 0 128 299 299]
processing patch 11 / 25 | [ 0 192 299 299]
processing patch 12 / 25 | [ 64 0 299 299]
processing patch 13 / 25 | [128 0 299 299]
processing patch 14 / 25 | [192 0 299 299]
processing patch 15 / 25 | [ 43 299 213 213]
processing patch 16 / 25 | [107 299 213 213]
processing patch 17 / 25 | [171 299 213 213]
processing patch 18 / 25 | [235 299 213 213]
processing patch 19 / 25 | [299 107 213 213]
processing patch 20 / 25 | [299 171 213 213]
processing patch 21 / 25 | [299 235 213 213]
processing patch 22 / 25 | [299 299 213 213]
processing patch 23 / 25 | [128 0 171 171]
processing patch 24 / 25 | [192 0 171 171]
processing patch 25 / 25 | [256 0 171 171]
100%|████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:40<00:00, 40.14s/it]
Computing output(s) done.
All done.
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:53<00:00, 2.67s/it] Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:53<00:00, 5.82it/s]
Ran i2i with one of the images output from the previous t2i and it ran fine.
I've re-enabled the extensions I wanted and it seems to be working. I'm not sure what the problem was. It behaves now, but sometimes i2i is really slow even for the same image dimensions.
What happened?
img2img > GPU > Boost > Save Outputs > Gen stereoscopic > L/R (anaglyph off), default settings, various models (midas, dpt_beit, res101)
p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type ', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {} Traceback (most recent call last): File "E:\stable-diffusion-webui\modules\call_queue.py", line 57, in f res = list(func(*args, *kwargs)) File "E:\stable-diffusion-webui\modules\call_queue.py", line 36, in f res = func(args, *kwargs) File "E:\stable-diffusion-webui\modules\img2img.py", line 206, in img2img processed = modules.scripts.scripts_img2img.run(p, args) File "E:\stable-diffusion-webui\modules\scripts.py", line 601, in run processed = script.run(p, *script_args) File "E:\stable-diffusion-webui\extensions\stable-diffusion-webui-depthmap-script\scripts\depthmap.py", line 34, in run inputs = GradioComponentBundle.enkey_to_dict(inputs) File "E:\stable-diffusion-webui\extensions\stable-diffusion-webui-depthmap-script\src\gradio_args_transport.py", line 86, in enkey_to_dict assert inp[-1].startswith("\u222F") IndexError: tuple index out of range
Steps to reproduce the problem
A1111
Add i2i item
include DepthMap Script
generate = error
What should have happened?
generate stereoscopic view from i2i process
Sysinfo
Device name: Duemellon
Processor: AMD Ryzen 7 5700G with Radeon Graphics, 3.80 GHz
Installed RAM: 16.0 GB (15.8 GB usable)
Device ID: FF8E163E-D697-4150-A920-97774908B70E
Product ID: 00342-20826-79719-AAOEM
System type: 64-bit operating system, x64-based processor
Pen and touch: Pen support
GPU: RTX 3060
What browsers do you use to access the UI?
Google Chrome
Console logs
venv "E:\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Version: v1.6.0 Commit hash: 5ef669de080814067961f28357256e8fe27544f4 Installing SD-CN-Animation requirement: scikit-image Installing sd-webui-controlnet requirement: changing opencv-python version from 4.7.0.72 to 4.8.0 Checking roop requirements Install insightface==0.7.3 Installing sd-webui-roop requirement: insightface==0.7.3 Install onnx==1.14.0 Installing sd-webui-roop requirement: onnx==1.14.0 Install onnxruntime==1.15.0 Installing sd-webui-roop requirement: onnxruntime==1.15.0 Install opencv-python==4.7.0.72 Installing sd-webui-roop requirement: opencv-python==4.7.0.72 Installing pyqt5 requirement for depthmap script Launching Web UI with arguments: no module 'xformers'. Processing without... no module 'xformers'. Processing without... No module 'xformers'. Proceeding without it. [-] ADetailer initialized. version: 23.9.1, num models: 13 *** Error loading script: test_persistent.py Traceback (most recent call last): File "E:\stable-diffusion-webui\modules\scripts.py", line 382, in load_scripts script_module = script_loading.load_module(scriptfile.path) File "E:\stable-diffusion-webui\modules\script_loading.py", line 10, in load_module module_spec.loader.exec_module(module) File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "E:\stable-diffusion-webui\extensions\animatediff-cli-prompt-travel\scripts\test_persistent.py", line 3, in
from animatediff import get_dir
ModuleNotFoundError: No module named 'animatediff'
[AddNet] Updating model hashes... 0it [00:00, ?it/s]
[AddNet] Updating model hashes... 0it [00:00, ?it/s]
2023-09-08 12:39:23,097 - ControlNet - INFO - ControlNet v1.1.408
ControlNet preprocessor location: E:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-09-08 12:39:23,608 - ControlNet - INFO - ControlNet v1.1.408
*** Error loading script: m2m_ui.py
Traceback (most recent call last):
File "E:\stable-diffusion-webui\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "E:\stable-diffusion-webui\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\stable-diffusion-webui\extensions\sd-webui-mov2mov\scripts\m2m_ui.py", line 12, in <module>
from modules.ui import paste_symbol, clear_prompt_symbol, extra_networks_symbol, apply_style_symbol, save_style_symbol, \
ImportError: cannot import name 'create_seed_inputs' from 'modules.ui' (E:\stable-diffusion-webui\modules\ui.py)
2023-09-08 12:39:25,338 - roop - INFO - roop v0.0.2
2023-09-08 12:39:25,422 - roop - INFO - roop v0.0.2
Loading weights [c4a3dfd218] from E:\stable-diffusion-webui\models\Stable-diffusion\truesight_v10.safetensors
WARNING:py.warnings:E:\stable-diffusion-webui\extensions--sd-webui-ar-plus\scripts\sd-webui-ar.py:448: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
arc_calc_height = gr.Button(value="Calculate Height").style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions--sd-webui-ar-plus\scripts\sd-webui-ar.py:448: GradioDeprecationWarning: Use `scale` in place of full_width in the constructor. scale=1 will make the button expand, whereas 0 will not.
arc_calc_height = gr.Button(value="Calculate Height").style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions--sd-webui-ar-plus\scripts\sd-webui-ar.py:456: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
arc_calc_width = gr.Button(value="Calculate Width").style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions--sd-webui-ar-plus\scripts\sd-webui-ar.py:456: GradioDeprecationWarning: Use `scale` in place of full_width in the constructor. scale=1 will make the button expand, whereas 0 will not.
arc_calc_width = gr.Button(value="Calculate Width").style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\sd-webui-roop\scripts\faceswap.py:38: GradioDeprecationWarning: Usage of gradio.inputs is deprecated, and will not be supported in the future, please import your component from gradio.components
img = gr.inputs.Image(type="pil")
WARNING:py.warnings:E:\stable-diffusion-webui\modules\gradio_extensons.py:25: GradioDeprecationWarning: `optional` parameter is deprecated, and it has no effect
res = original_IOComponent_init(self, *args, **kwargs)
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\sd-webui-roop\scripts\faceswap.py:55: GradioDeprecationWarning: Usage of gradio.inputs is deprecated, and will not be supported in the future, please import your component from gradio.components
upscaler_name = gr.inputs.Dropdown(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\sd-webui-roop\scripts\faceswap.py:74: GradioDeprecationWarning: Usage of gradio.inputs is deprecated, and will not be supported in the future, please import your component from gradio.components
model = gr.inputs.Dropdown(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\StylePile\scripts\StylePile.py:412: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
gr.Gallery(value=ResultDirectionImages, show_label=False).style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\StylePile\scripts\StylePile.py:412: GradioDeprecationWarning: The 'grid' parameter will be deprecated. Please use 'columns' in the constructor instead.
gr.Gallery(value=ResultDirectionImages, show_label=False).style(
Creating model from config: E:\stable-diffusion-webui\configs\v1-inference.yaml
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\StylePile\scripts\StylePile.py:416: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
gr.Gallery(value=ResultMoodImages, show_label=False).style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\StylePile\scripts\StylePile.py:416: GradioDeprecationWarning: The 'grid' parameter will be deprecated. Please use 'columns' in the constructor instead.
gr.Gallery(value=ResultMoodImages, show_label=False).style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\StylePile\scripts\StylePile.py:420: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
gr.Gallery(value=ResultArtistImages, show_label=False).style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\StylePile\scripts\StylePile.py:420: GradioDeprecationWarning: The 'grid' parameter will be deprecated. Please use 'columns' in the constructor instead.
gr.Gallery(value=ResultArtistImages, show_label=False).style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\StylePile\scripts\StylePile.py:424: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
gr.Gallery(value=ArtMovementImages, show_label=False).style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\StylePile\scripts\StylePile.py:424: GradioDeprecationWarning: The 'grid' parameter will be deprecated. Please use 'columns' in the constructor instead.
gr.Gallery(value=ArtMovementImages, show_label=False).style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\StylePile\scripts\StylePile.py:428: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
gr.Gallery(value=ResultColorImages, show_label=False).style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\StylePile\scripts\StylePile.py:428: GradioDeprecationWarning: The 'grid' parameter will be deprecated. Please use 'columns' in the constructor instead.
gr.Gallery(value=ResultColorImages, show_label=False).style(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\Mask2Background\scripts\inpaint_anything.py:161: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
mask_out_image = gr.Image(label="Get mask image", elem_id="mask_out_image", type="numpy", interactive=False).style(height=480)
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\Mask2Background\scripts\inpaint_anything.py:168: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
sam_image = gr.Image(label="Fill the background image", elem_id="sam_image", type="numpy", tool="sketch", brush_radius=8,
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\Mask2Background\scripts\inpaint_anything.py:174: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
sel_mask = gr.Image(label="Create mask image", elem_id="sel_mask", type="numpy", tool="sketch", brush_radius=12,
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\SD-CN-Animation\scripts\base_ui.py:172: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
with gr.Row(elem_id='sdcn-core').style(equal_height=False, variant='compact'):
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\SD-CN-Animation\scripts\base_ui.py:194: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
img_preview_curr_frame = gr.Image(label='Current frame', elem_id=f"img_preview_curr_frame", type='pil').style(height=240)
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\SD-CN-Animation\scripts\base_ui.py:195: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
img_preview_curr_occl = gr.Image(label='Current occlusion', elem_id=f"img_preview_curr_occl", type='pil').style(height=240)
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\SD-CN-Animation\scripts\base_ui.py:197: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
img_preview_prev_warp = gr.Image(label='Previous frame warped', elem_id=f"img_preview_curr_frame", type='pil').style(height=240)
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\SD-CN-Animation\scripts\base_ui.py:198: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
img_preview_processed = gr.Image(label='Processed', elem_id=f"img_preview_processed", type='pil').style(height=240)
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\infinite-zoom-automatic1111-webui\iz_helpers\ui.py:253: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
output_video = gr.Video(label="Output").style(width=512, height=512)
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\metadata_editor.py:399: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
with gr.Row().style(equal_height=False):
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\metadata_editor.py:521: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
cover_image = gr.Image(
WARNING:py.warnings:E:\stable-diffusion-webui\extensions\sd-webui-text2video\scripts\text2vid.py:48: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
with gr.Row(elem_id='t2v-core').style(equal_height=False, variant='compact'):
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
. Startup time: 110.5s (prepare environment: 30.9s, import torch: 6.7s, import gradio: 3.1s, setup paths: 3.5s, initialize shared: 0.7s, other imports: 3.7s, setup codeformer: 0.5s, setup gfpgan: 0.1s, list SD models: 0.2s, load scripts: 10.7s, initialize extra networks: 0.2s, create ui: 47.9s, gradio launch: 2.6s, app_started_callback: 0.2s). Loading VAE weights specified in settings: E:\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors Applying attention optimization: Doggettx... done. Error completing request Arguments: ('task(6pznct1gdjqoyxr)', 'car on road', '', [], 20, 'Euler a', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000021AE870BF10>, 6, False, '', 0.8, -1, False, -1, 0, 0, 0, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': 
False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 1024, 0, 15, 'R-ESRGAN 4x+', 'R-ESRGAN 4x+', 0.3, 0.1, '', '', 2, 'Noise sync (sharp)', 0, 0.05, 0, 'DPM++ 2M SDE', False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 2048, 128, True, True, True, False, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', False, 0, 16, 8, 'animatediffMotion_v14.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000021AE870A5F0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000021AE87099F0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000021AE8709F90>, False, 'None', 20, None, False, '0', 'E:\stable-diffusion-webui\models\roop\inswapper_128.onnx', 'CodeFormer', 1, '', 1, 1, False, True, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None', None, None, False, None, None, False, None, None, False, 50, True, False, 0, 'Range', 1, 'GPU', True, False, False, False, True, 0, 448, False, 448, False, False, 3, False, 3, True, 3, False, 'Horizontal', False, False, 'u2net', False, True, True, False, 0, 2.5, 'polylines_sharp', ['left-right'], 2, 0, 
'∯boost∯clipdepth∯clipdepth_far∯clipdepth_mode∯clipdepth_near∯compute_device∯do_output_depth∯gen_normalmap∯gen_rembg∯gen_simple_mesh∯gen_stereo∯model_type∯net_height∯net_size_match∯net_width∯normalmap_invert∯normalmap_post_blur∯normalmap_post_blur_kernel∯normalmap_pre_blur∯normalmap_pre_blur_kernel∯normalmap_sobel∯normalmap_sobel_kernel∯output_depth_combine∯output_depth_combine_axis∯output_depth_invert∯pre_depth_background_removal∯rembg_model∯save_background_removal_masks∯save_outputs∯simple_mesh_occlude∯simple_mesh_spherical∯stereo_balance∯stereo_divergence∯stereo_fill_algo∯stereo_modes∯stereo_offset_exponent∯stereo_separation') {}
Traceback (most recent call last):
File "E:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "E:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "E:\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img
processed = modules.scripts.scripts_txt2img.run(p, *args)
File "E:\stable-diffusion-webui\modules\scripts.py", line 601, in run
processed = script.run(p, *script_args)
File "E:\stable-diffusion-webui\extensions\stable-diffusion-webui-depthmap-script\scripts\depthmap.py", line 34, in run
inputs = GradioComponentBundle.enkey_to_dict(inputs)
File "E:\stable-diffusion-webui\extensions\stable-diffusion-webui-depthmap-script\src\gradio_args_transport.py", line 86, in enkey_to_dict
assert inp[-1].startswith("\u222F")
IndexError: tuple index out of range
Model loaded in 125.0s (load weights from disk: 3.1s, create model: 0.3s, apply weights to model: 89.5s, apply half(): 13.9s, load VAE: 5.3s, load textual inversion embeddings: 3.3s, calculate empty prompt: 9.4s).