Open kalle07 opened 1 month ago
What happened?
openpose works, photoID works; reference does not work (console below) and depth does not work (a similar error: TypeError: 'NoneType' object is not iterable)
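For context, the "TypeError: 'NoneType' object is not iterable" message is usually a secondary symptom rather than the root cause: the underlying generation call fails, hands back nothing, and the outer queue wrapper then tries to iterate over that missing result. A minimal sketch of the pattern, with hypothetical names:

```python
# Minimal sketch, hypothetical names: the real failure happens inside the
# worker, the worker hands back None, and the caller's list(...) then raises
# "TypeError: 'NoneType' object is not iterable" instead of the real error.

def img2img_task():
    # stand-in for the failing img2img call; assume it raises internally
    raise RuntimeError("Sizes of tensors must match except in dimension 1.")

def run_on_main_thread(func):
    try:
        return func()
    except RuntimeError as err:
        print(err)      # the real error surfaces here ...
        return None     # ... but the caller only ever sees None

result = run_on_main_thread(img2img_task)
try:
    res = list(result)  # what the queue wrapper effectively does
except TypeError as err:
    print(err)          # 'NoneType' object is not iterable
```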
Steps to reproduce the problem
All of these errors occur in the "img2img" tab; most of it works in txt2img.
What should have happened?
-
What browsers do you use to access the UI ?
No response
Sysinfo
win10, rtx4060, forge version: f0.0.17v1.8.0rc

Console logs
---
2024-05-31 15:17:13,385 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-05-31 15:17:13,385 - ControlNet - INFO - Using preprocessor: reference_only
2024-05-31 15:17:13,385 - ControlNet - INFO - preprocessor resolution = 0.5
2024-05-31 15:17:13,445 - ControlNet - INFO - Current ControlNet ControlModelPatcher: Not Needed
2024-05-31 15:17:14,020 - ControlNet - INFO - ControlNet Method reference_only patched.
To load target model SDXL
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) = 4634.16552734375
[Memory Management] Model Memory (MB) = 0.0
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 3610.16552734375
Moving model(s) has taken 0.05 seconds
0%| | 0/16 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "E:\WebUI_Forge\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "E:\WebUI_Forge\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "E:\WebUI_Forge\webui\modules\img2img.py", line 236, in img2img_function
    processed = process_images(p)
  File "E:\WebUI_Forge\webui\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "E:\WebUI_Forge\webui\modules\processing.py", line 922, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "E:\WebUI_Forge\webui\modules\processing.py", line 1703, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "E:\WebUI_Forge\webui\modules\sd_samplers_kdiffusion.py", line 197, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "E:\WebUI_Forge\webui\modules\sd_samplers_common.py", line 263, in launch_sampling
    return func()
  File "E:\WebUI_Forge\webui\modules\sd_samplers_kdiffusion.py", line 197, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\WebUI_Forge\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\WebUI_Forge\webui\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "E:\WebUI_Forge\webui\modules_forge\forge_sampler.py", line 88, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "E:\WebUI_Forge\webui\ldm_patched\modules\samplers.py", line 289, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "E:\WebUI_Forge\webui\ldm_patched\modules\samplers.py", line 256, in calc_cond_uncond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\WebUI_Forge\webui\extensions-builtin\sd_forge_multidiffusion\lib_multidiffusion\tiled_diffusion.py", line 428, in __call__
    x_tile_out = model_function(x_tile, ts_tile, **c_tile)
  File "E:\WebUI_Forge\webui\ldm_patched\modules\model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\WebUI_Forge\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 867, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "E:\WebUI_Forge\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 55, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\WebUI_Forge\webui\ldm_patched\ldm\modules\attention.py", line 620, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\WebUI_Forge\webui\ldm_patched\ldm\modules\attention.py", line 447, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "E:\WebUI_Forge\webui\ldm_patched\ldm\modules\diffusionmodules\util.py", line 194, in checkpoint
    return func(*inputs)
  File "E:\WebUI_Forge\webui\ldm_patched\ldm\modules\attention.py", line 504, in _forward
    n = attn1_replace_patch[block_attn1](n, context_attn1, value_attn1, extra_options)
  File "E:\WebUI_Forge\webui\extensions-builtin\forge_preprocessor_reference\scripts\forge_reference.py", line 172, in attn1_proc
    o_c = sdp(q_c, zero_cat(k_c, k_r, dim=1), zero_cat(v_c, v_r, dim=1), transformer_options)
  File "E:\WebUI_Forge\webui\extensions-builtin\forge_preprocessor_reference\scripts\forge_reference.py", line 29, in zero_cat
    return torch.cat([a, b], dim=dim)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 1 but got size 2 for tensor number 1 in the list.
Sizes of tensors must match except in dimension 1. Expected size 1 but got size 2 for tensor number 1 in the list.
*** Error completing request *** Arguments: ('task(mfhbttqrzlyemjk)', 0, 'boy sitting in the room', '', [], <PIL.Image.Image image mode=RGBA size=1280x720 at 0x17A11DF8910>, None, None, None, None, None, None, 20, 'DPM++ 2M Karras', 4, 0, 1, 1, 1, 3, 1.5, 0.78, 0.0, 640, 1024, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x0000017A51D56050>, 0, False, 1, 0.5, 4, 0, 0.5, 2, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 
'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 0.5, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=array([[[191, 173, 159], *** [191, 173, 159], *** [191, 173, 159], *** ..., *** [216, 214, 202], *** [214, 210, 199], *** [211, 207, 196]], *** *** [[191, 173, 159], *** [191, 173, 159], *** [191, 173, 159], *** ..., *** [219, 217, 205], *** [217, 213, 202], *** [214, 210, 199]], *** *** [[191, 173, 159], *** [191, 173, 159], *** [191, 173, 159], *** ..., *** [223, 221, 209], *** [221, 217, 206], *** [218, 214, 203]], *** *** ..., *** *** [[221, 204, 196], *** [218, 201, 193], *** [212, 195, 187], *** ..., *** [202, 179, 161], *** [218, 192, 177], *** [224, 196, 182]], *** *** [[217, 197, 190], *** [206, 186, 179], *** [188, 169, 162], *** ..., *** [224, 201, 183], *** [226, 200, 185], *** [205, 179, 164]], *** *** [[213, 190, 184], *** [197, 174, 168], *** [172, 152, 145], *** ..., *** [193, 170, 152], *** [173, 147, 132], *** [154, 128, 113]]], dtype=uint8), mask_image=None, hr_option='Both', enabled=True, module='reference_only', model='None', weight=1, image={'image': array([[[191, 173, 159], *** [191, 173, 159], *** [191, 173, 159], *** ..., *** [216, 214, 202], *** [214, 210, 199], *** [211, 207, 196]], *** *** [[191, 173, 159], *** [191, 173, 159], *** [191, 173, 159], *** ..., *** [219, 217, 205], *** [217, 213, 202], *** [214, 210, 199]], *** *** [[191, 173, 159], *** [191, 173, 159], *** [191, 173, 159], *** ..., *** [223, 221, 209], *** [221, 217, 206], *** [218, 214, 203]], *** *** ..., *** *** [[221, 204, 196], *** [218, 201, 193], *** [212, 195, 187], *** ..., *** [202, 179, 161], *** [218, 192, 177], *** [224, 196, 182]], *** *** [[217, 197, 190], *** [206, 186, 179], *** [188, 169, 162], *** ..., *** [224, 201, 183], *** [226, 200, 185], *** [205, 179, 164]], *** *** [[213, 190, 184], *** [197, 174, 168], *** [172, 152, 145], *** ..., *** [193, 170, 152], *** [173, 147, 132], *** [154, 
128, 113]]], dtype=uint8), 'mask': array([[[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** ..., *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', processor_res=0.5, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='ControlNet is more important', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=array([[[ 9, 9, 9], *** [ 9, 9, 9], *** [ 9, 9, 9], *** ..., *** [ 34, 34, 34], *** [ 33, 33, 33], *** [ 32, 32, 32]], *** *** [[ 9, 9, 9], *** [ 9, 9, 9], *** [ 10, 10, 10], *** ..., *** [ 34, 34, 34], *** [ 33, 33, 33], *** [ 32, 32, 32]], *** *** [[ 10, 10, 10], *** [ 10, 10, 10], *** [ 10, 10, 10], *** ..., *** [ 35, 35, 35], *** [ 34, 34, 34], *** [ 33, 33, 33]], *** *** ..., *** *** [[248, 248, 248], *** [249, 249, 249], *** [249, 249, 249], *** ..., *** [195, 195, 195], *** [195, 195, 195], *** [195, 195, 195]], *** *** [[250, 250, 250], *** [250, 250, 250], *** [251, 251, 251], *** ..., *** [196, 196, 196], *** [196, 196, 196], *** [196, 196, 196]], *** *** [[250, 250, 250], *** [251, 251, 251], *** [251, 251, 251], *** ..., *** [197, 197, 197], *** [197, 197, 197], *** [197, 197, 197]]], dtype=uint8), mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='t2i-adapter_diffusers_xl_depth_midas [9c183166]', weight=1, image={'image': array([[[191, 173, 159], *** [191, 173, 159], *** [191, 173, 159], *** ..., *** [216, 214, 202], *** [214, 210, 199], *** [211, 207, 196]], *** *** [[191, 173, 159], *** [191, 173, 159], *** [191, 173, 159], *** ..., *** [219, 217, 205], *** [217, 213, 202], *** [214, 210, 199]], *** *** [[191, 173, 159], *** [191, 173, 159], *** [191, 173, 159], *** ..., *** [223, 221, 209], *** [221, 217, 206], *** [218, 214, 203]], *** *** ..., *** *** [[221, 204, 196], *** [218, 201, 193], *** [212, 195, 187], *** ..., *** [202, 179, 161], *** [218, 192, 177], *** [224, 196, 182]], *** *** [[217, 197, 190], *** [206, 186, 179], *** [188, 169, 162], *** ..., *** [224, 201, 183], *** [226, 200, 185], *** [205, 179, 164]], *** *** [[213, 190, 184], *** [197, 174, 168], *** [172, 152, 145], *** ..., *** [193, 170, 152], *** [173, 147, 132], *** [154, 128, 113]]], dtype=uint8), 'mask': array([[[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** ..., *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 
0], *** [0, 0, 0], *** [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', processor_res=512, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', True, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {} Traceback (most recent call last): File "E:\WebUI_Forge\webui\modules\call_queue.py", line 57, in f res = list(func(*args, **kwargs)) TypeError: 'NoneType' object is not iterable ---
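For what it's worth, the RuntimeError at the bottom of this traceback is raised by torch.cat inside zero_cat in forge_reference.py: the reference_only attention patch concatenates the stored reference keys/values onto the current ones, and that fails as soon as the two sides no longer agree in the batch dimension (1 vs 2). The Tiled Diffusion wrapper (tiled_diffusion.py) sits between the sampler and the UNet in this trace, so its re-batching of the latents is a plausible reason the shapes diverge. A minimal sketch of the failing condition, with assumed shapes for illustration:

```python
import torch

# Assumed shapes, for illustration only: the current keys and the stored
# reference keys disagree in the batch dimension, so concatenating them along
# the sequence dimension (dim=1) fails exactly as in the log above.
k_current   = torch.randn(1, 4096, 64)   # current pass, batch size 1
k_reference = torch.randn(2, 4096, 64)   # reference bank, batch size 2

try:
    torch.cat([k_current, k_reference], dim=1)   # what zero_cat effectively does
except RuntimeError as err:
    print(err)  # Sizes of tensors must match except in dimension 1.
                # Expected size 1 but got size 2 for tensor number 1 in the list.
```

The closing TypeError in the log is then just the queue wrapper tripping over the missing result, as sketched under "What happened?" above.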
Additional information
No response

Hello ?!? BIG BUG