lllyasviel / stable-diffusion-webui-forge


[Bug]: Regional Prompter attention mode broken (2024-03-08) #515

Closed zaqhack closed 7 months ago

zaqhack commented 8 months ago

What happened?

Not sure how to troubleshoot this further. It was working two days ago. I ran Forge this morning and there were some updates. I installed it again in a clean directory, but it has the same issue. With Regional Prompter, the settings don't "stick" in the UI, which makes me think it isn't passing the right parameters anymore. When you click "Generate," it gives this familiar error:

Traceback (most recent call last):                                                         
  File "/opt/ui/forge-2024-03-08/modules/call_queue.py", line 57, in f                     
    res = list(func(*args, **kwargs))                                                      
TypeError: 'NoneType' object is not iterable
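For context, this TypeError is Forge's generic wrapper failure rather than the root cause: `call_queue.py` materializes every UI handler's result with `list(...)`, so any extension error that makes the handler return nothing surfaces as this exact message. A minimal sketch (hypothetical handler names, not Forge code):

```python
def wrap_call(func, *args, **kwargs):
    # Mirrors the failing line in modules/call_queue.py:
    #     res = list(func(*args, **kwargs))
    # list(None) is what actually raises the TypeError.
    try:
        return list(func(*args, **kwargs))
    except TypeError as e:
        return str(e)

def broken_handler():
    # A hypothetical UI handler whose extension hook failed and returned nothing.
    return None

print(wrap_call(broken_handler))  # → "'NoneType' object is not iterable"
```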

This seems to be true whether I am trying to use columns or a mask - it just doesn't care. Latent couple doesn't give the same error, but fails to separate the subjects. I'm 100% certain I used Regional Prompter just two days ago, and maybe even yesterday morning.

Odd UI behavior: https://youtu.be/lmXAcwo71s0

Steps to reproduce the problem

See also the linked video ...

  1. Load up Regional Prompter.
  2. Create a 4-column layout (1,1,1,1).
  3. Select "columns" and "attention."
  4. Create template.

Attention and Columns will no longer be selected ...

What should have happened?

They should still be selected. And, my hunch, they should still pass parameters to Regional Prompter. :-)

What browsers do you use to access the UI?

Mozilla Firefox, Google Chrome, Microsoft Edge

Sysinfo

sysinfo-2024-03-08-19-26.json

Console logs

Already up to date.

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on zaqhack user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
python venv already activate or run without venv: /opt/ui/forge-2024-03-08/venv
################################################################

################################################################
Accelerating launch.py...
################################################################
glibc version is 2.35
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4
[2024-03-08 11:39:00,783] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments: --administrator --enable-insecure-extension-access --listen --port 7860 --theme dark --no-download-sd-model --no-hashing --allow-code --api --xformers --opt-channelslast --opt-split-attention --no-half --pin-shared-memory --cuda-malloc --cuda-stream
Using cudaMallocAsync backend.
Total VRAM 24257 MB, total RAM 128741 MB
xformers version: 0.0.23.post1
Set vram state to: NORMAL_VRAM
Always pin shared GPU memory
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
VAE dtype: torch.bfloat16
CUDA Stream Activated:  True
Using xformers cross attention
ControlNet preprocessor location: /opt/ui/forge-2024-03-08/models/ControlNetPreprocessor
Loading weights [None] from /opt/ui/forge-2024-03-08/models/Stable-diffusion/0_sdxl/ponyDiffusionV6XL_v6StartWithThisOne.safetensors
2024-03-08 11:39:11,143 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 11.2s (prepare environment: 1.8s, import torch: 3.8s, import gradio: 0.7s, setup paths: 0.6s, other imports: 0.4s, load scripts: 2.0s, create ui: 0.8s, gradio launch: 0.2s, add APIs: 0.7s).
model_type EPS
UNet ADM Dimension 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  23750.842864990234
[Memory Management] Model Memory (MB) =  2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  20582.488166809082
Moving model(s) has taken 0.16 seconds
Model loaded in 4.5s (load weights from disk: 1.7s, forge load real models: 2.1s, calculate empty prompt: 0.6s).
1,1,1,1 0.3 Horizontal
Regional Prompter Active, Pos tokens : [57, 24, 32, 35, 65], Neg tokens : [15, 21, 27, 17, 11]
2024-03-08 11:39:44,357 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-03-08 11:39:44,359 - ControlNet - INFO - Using preprocessor: None
2024-03-08 11:39:44,359 - ControlNet - INFO - preprocessor resolution = 1024
[] []
2024-03-08 11:39:45,079 - ControlNet - INFO - Current ControlNet ControlNetPatcher: /opt/ui/forge-2024-03-08/models/ControlNet/OpenPoseXL2.safetensors
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  21838.87897491455
[Memory Management] Model Memory (MB) =  159.55708122253418
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  20655.321893692017
Moving model(s) has taken 0.02 seconds
[LORA] Loaded /opt/_models/models/Lora/Styles/Smooth Anime Style LoRA XL.safetensors for SDXL-UNet with 722 keys at weight 0.75 (skipped 0 keys)
[LORA] Loaded /opt/_models/models/Lora/Styles/Smooth Anime Style LoRA XL.safetensors for SDXL-CLIP with 264 keys at weight 0.75 (skipped 0 keys)
To load target model SDXLClipModel
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) =  21681.987573623657
[Memory Management] Model Memory (MB) =  0.0
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  20657.987573623657
Moving model(s) has taken 0.51 seconds
2024-03-08 11:39:47,496 - ControlNet - INFO - ControlNet Method None patched.
To load target model SDXL
To load target model ControlNet
Begin to load 2 models
[Memory Management] Current Free GPU Memory (MB) =  21675.481714248657
[Memory Management] Model Memory (MB) =  4897.086494445801
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  15754.395219802856
[Memory Management] Current Free GPU Memory (MB) =  16778.385454177856
[Memory Management] Model Memory (MB) =  2386.120147705078
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  13368.265306472778
Moving model(s) has taken 0.87 seconds
hook_forward.<locals>.forward() got an unexpected keyword argument 'transformer_options'

Additional information

The UI is typically accessed over the network via --listen ... Windows/Mac browser to a Linux backend.
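The last console line before the failure hints at the likely root cause: the backend now passes a `transformer_options` keyword into attention forward calls, but Regional Prompter's patched `hook_forward` closure was written for the old call signature. A reduced sketch of that failure mode (all names here are illustrative stand-ins, not the actual Forge or extension code):

```python
class Attention:
    def forward(self, x, context=None, transformer_options=None):
        # The backend's new signature forwards transformer_options.
        return x

def hook_forward(module):
    # Sketch of an extension monkey-patch written for the old signature.
    orig = module.forward
    def forward(x, context=None):  # no transformer_options parameter
        return orig(x, context)
    module.forward = forward

def trigger():
    attn = Attention()
    hook_forward(attn)
    try:
        attn.forward("x", context=None, transformer_options={})
    except TypeError as e:
        return str(e)

# Reports the unexpected-keyword-argument message seen in the log above.
print(trigger())
```

Once the caller adds a keyword the patched closure doesn't declare, every generation fails before sampling starts, which is why the extension appears completely broken rather than subtly wrong.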

Postmoderncaliban commented 8 months ago

Can confirm. Having Regional Prompter active in attention mode causes TypeError: 'NoneType' object is not iterable. Latent mode seems to work fine.

Kaneda56 commented 8 months ago

Same error for me...

2shinrei commented 8 months ago

I have the same problem. The Regional Prompter extension's attention mode completely stopped working, and latent doesn't separate the subjects: TypeError: 'NoneType' object is not iterable. I'm on "29be1da" and reverted a few commits back because it worked a few days ago; on "b9705c5" it works again. So it seems one of the last three commits broke this functionality. @lllyasviel
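The revert-and-test loop described here is exactly what `git bisect` automates (the real invocation would use the hashes from this thread, e.g. `git bisect start 29be1da b9705c5`). A throwaway-repo sketch of the workflow:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q
for i in 1 2 3 4 5; do
  echo "$i" > state
  git add state
  git -c user.email=ci@example.com -c user.name=ci commit -qm "commit $i"
done
# Pretend the regression landed in "commit 4": commits with state >= 4 fail the test.
git bisect start HEAD HEAD~4 > /dev/null     # bad = current, good = known-good
git bisect run sh -c 'test "$(cat state)" -lt 4' > /dev/null
first_bad=$(git show -s --format=%s refs/bisect/bad)
echo "first bad: $first_bad"                 # prints "first bad: commit 4"
git bisect reset > /dev/null
```

For Forge, the test step would be "launch the UI and try an attention-mode generation" instead of the scripted check, marking each commit `git bisect good` or `git bisect bad` by hand.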

Heather95 commented 8 months ago

> I have the same problem. Regional Prompting extension attention mode completely stopped working and latent doesn't separate the subjects. TypeError: 'NoneType' object is not iterable I'm on "29be1da" and reverted a few commits back because it worked a few days ago and on "b9705c5" it works again. So it seems one of the last three commits broke this functionality. @lllyasviel

Same here. Regional Prompter has stopped working as you described in txt2img, img2img, anywhere.

I disabled every extension (including built-in) except for Regional Prompter and still received the error. When I disabled RP, I did not get an error. It was working a couple days ago. Also, no error in A1111.

Someone opened an issue on RP repository: https://github.com/hako-mikan/sd-webui-regional-prompter/issues/307

slashedstar commented 8 months ago

The one time in a month that I wanna use it and it broke lol, guess I'll roll back

trueconnor commented 8 months ago

In my WebUI Forge, RP and LC don't work at all, even for txt2img

sanora87 commented 8 months ago

> roll back to "b9705c5"

I wish I knew how to do this. I tried using a youtube tutorial and just got errors for my troubles.

2shinrei commented 8 months ago

> roll back to "b9705c5"
>
> I wish I knew how to do this. I tried using a youtube tutorial and just got errors for my troubles.

In a terminal window, navigate to the root installation folder of WebUI Forge (...\stable-diffusion-webui-forge).

Type `git reset --hard b9705c5` and hit Enter.

Type `git log -1` and hit Enter.

Check the first seven digits/letters next to "commit". Do they match the ones you typed in earlier? If yes, the rollback was successful.
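The steps above, demonstrated in a throwaway repo so they can be tried safely (for the real rollback, run them in your Forge checkout with `b9705c5` in place of the demo hash):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "known good"
good=$(git rev-parse --short HEAD)           # stands in for b9705c5
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "broken update"
git reset --hard "$good" > /dev/null         # step 2: roll back
git log -1 --format='%h %s'                  # step 3: the short hash should match $good
```

Note that `git reset --hard` discards any local changes in the working tree, so commit or stash anything you care about first.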

CRCODE22 commented 8 months ago

Same problem: Regional Prompter does not work anymore since I updated Forge today.

sanora87 commented 8 months ago

> roll back to "b9705c5"
>
> I wish I knew how to do this. I tried using a youtube tutorial and just got errors for my troubles.
>
> In a terminal window, navigate to the root installation folder of WebUI Forge (...\stable-diffusion-webui-forge).
>
> Type `git reset --hard b9705c5` and hit Enter.
>
> Type `git log -1` and hit Enter.
>
> Check the first seven digits/letters next to "commit". Do they match the ones you typed in earlier? If yes, the rollback was successful.

Wow. That easy? Thanks, you're the best.

E2GO commented 8 months ago

Same error. Rolling back to the b9705c5 commit fixed it. UPD: the rollback breaks LoRA for me... needs a fix in Forge.

sandner-art commented 8 months ago

Does not work in txt2img either. Let's hope it will be fixed soon. Forge is great; it would be a pity to switch back to A1111.

catboxanon commented 8 months ago

Duplicate of https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/242

Prigodjin commented 8 months ago

Same problem! Do something please!

catboxanon commented 7 months ago

Re-opening since I realized this is actually a different issue. Will be fixed when https://github.com/hako-mikan/sd-webui-regional-prompter/pull/308 is merged.

catboxanon commented 7 months ago

Upstream PR merged. Update the Regional Prompter extension and it will work now.
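For anyone unsure how to update: it is just a `git pull` inside the extension's folder (path below assumes the default `extensions/` layout). A throwaway-repo sketch of the same flow, with a local stand-in for the upstream repository:

```shell
# Real-world usage (path assumed):
#   cd stable-diffusion-webui-forge/extensions/sd-webui-regional-prompter
#   git pull
# Demo of the same flow against a local "upstream":
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q upstream
git -C upstream -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "before fix"
git clone -q upstream extension                 # the installed extension
git -C upstream -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "fix: attention mode"
git -C extension pull -q                        # update the extension
git -C extension log -1 --format=%s             # prints "fix: attention mode"
```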

t-Ghost-t commented 5 months ago

I believe I'm having this issue now, post-update. Latent works, attention does not. All the same as above, on Forge.

Console logs

```
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 6665.920343399048
[Memory Management] Model Memory (MB) = 4897.086494445801
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 744.8338489532471
Moving model(s) has taken 1.45 seconds
  0%| | 0/30 [00:00
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules_forge\forge_sampler.py", line 88, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\modules\samplers.py", line 289, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\modules\samplers.py", line 258, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\modules\model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 867, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 55, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\ldm\modules\attention.py", line 620, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\ldm\modules\attention.py", line 447, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\ldm\modules\diffusionmodules\util.py", line 194, in checkpoint
    return func(*inputs)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\ldm\modules\attention.py", line 547, in _forward
    n = self.attn2(n, context=context_attn2, value=value_attn2, transformer_options=extra_options)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\extensions\sd-webui-regional-prompter\scripts\attention.py", line 429, in forward
    opx = masksepcalc(px, conp, mask, True, 2)
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\extensions\sd-webui-regional-prompter\scripts\attention.py", line 309, in masksepcalc
    context = contexts[:,tll[i][0] * TOKENSCON:tll[i][1] * TOKENSCON,:]
IndexError: list index out of range
list index out of range
*** Error completing request
*** Arguments: ('task(oxa205sa499e3ys)', , '1girl, blue shirt, BREAK, 1boy, red shirt,',
'score_5, score_4, negativeXL_D,', [], 30, 'Euler a', 1, 1, 7, 1216, 832, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, 102476424, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 
'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, '', 0.5, True, False, '', 'Lerp', False, False, 8, True, False, 16, 'Median cut', 'None', True, False, 16, 'Median cut', 'None', True, False, False, 128, False, None, 16, 'None', True, False, 'Mask', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False, 'Use BREAK to change chunks'], '0', '0', '0.4', {'image': array([[[255, 255, 255], *** [255, 255, 255], *** [255, 255, 255], *** ..., *** [255, 255, 255], *** [255, 255, 255], *** [255, 255, 255]], *** *** [[255, 255, 255], *** [255, 255, 255], *** [255, 255, 255], *** ..., *** [255, 255, 255], *** [255, 255, 255], *** [255, 255, 255]], *** *** [[255, 255, 255], *** [255, 255, 255], *** [255, 255, 255], *** ..., *** [255, 255, 255], *** [255, 255, 255], *** [255, 255, 255]], *** *** ..., *** *** [[255, 255, 255], *** [255, 255, 255], *** [255, 255, 255], *** ..., *** [255, 255, 255], *** [255, 255, 255], *** [255, 255, 255]], *** *** [[255, 255, 255], *** [255, 255, 255], *** [255, 255, 255], *** ..., *** [255, 255, 255], *** [255, 255, 255], *** [255, 255, 255]], *** *** [[255, 255, 255], *** [255, 255, 255], *** [255, 255, 255], *** ..., *** [255, 255, 255], *** [255, 255, 255], *** [255, 255, 255]]], dtype=uint8), 'mask': array([[[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** 
*** ..., *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]]], dtype=uint8)}, '0', '0', False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, 
True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, [], 30, '', 4, [], 1, '', '', '', '') {}
Traceback (most recent call last):
  File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
---
```
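Note that this later log fails differently from the original report: it dies inside Regional Prompter's own `masksepcalc` with an IndexError, meaning the per-region token table has fewer entries than the mask defines regions. A reduced sketch of that mismatch (the values and helper name are illustrative; `TOKENSCON` stands in for the extension's tokens-per-chunk constant):

```python
TOKENSCON = 77  # stand-in for Regional Prompter's tokens-per-chunk constant

def region_context_slices(tll, num_regions):
    # Mirrors the shape of masksepcalc's per-region slice:
    #     contexts[:, tll[i][0]*TOKENSCON : tll[i][1]*TOKENSCON, :]
    # If the mask defines more regions than there are BREAK-separated
    # prompt chunks, tll[i] runs past the end of the list.
    try:
        return [(lo * TOKENSCON, hi * TOKENSCON)
                for i in range(num_regions)
                for lo, hi in [tll[i]]]
    except IndexError as e:
        return str(e)

print(region_context_slices([(0, 1), (1, 2)], 2))  # 2 chunks, 2 regions → [(0, 77), (77, 154)]
print(region_context_slices([(0, 1)], 2))          # 1 chunk, 2 regions → "list index out of range"
```

That would point at a prompt/region mismatch (e.g. the `1,1` ratio with a mask but only one BREAK in the prompt) rather than the earlier `transformer_options` incompatibility, so it may deserve its own issue.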