lllyasviel / stable-diffusion-webui-forge

[Bug]: NeverOOM sometimes crashes the generation #484

Open aolko opened 7 months ago

aolko commented 7 months ago

What happened?

Enabling the NeverOOM options results in either `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!` or `TypeError: 'NoneType' object is not iterable`.

Steps to reproduce the problem

  1. Enable all or one of the NeverOOM options (especially Tiled VAE)
  2. Watch the generation crash
  3. With Tiled VAE disabled, try generating an XYZ grid with model changes

What should have happened?

Generation shouldn't crash

What browsers do you use to access the UI?

Google Chrome, Microsoft Edge

Sysinfo

sysinfo-2024-03-04-12-41.json

Console logs

https://paste.ee/p/DFdks

Additional information

No response

CCpt5 commented 7 months ago

I've experienced the second error, `'NoneType' object is not iterable`, when using an extension called Model Mixer (it merges models). I found that a workaround is to hit the button that refreshes the model list in the upper-left quick list. After I refresh the model list, Forge can generate again.

I'm not sure whether that will help you until this is looked at and possibly fixed (if there is a universal issue), but it's something to try in the meantime if you get stuck.

deceani commented 7 months ago

I get the same error with certain sizes. For example, it works at 1024x1024 px or 1024x576 px, but I get the error at 1200x672, 912x512, or 1024x816.

aolko commented 7 months ago

> I've experienced the second error, `'NoneType' object is not iterable`, when using an extension called Model Mixer (it merges models). I found that a workaround is to hit the button that refreshes the model list in the upper-left quick list. After I refresh the model list, Forge can generate again.
>
> I'm not sure whether that will help you until this is looked at and possibly fixed (if there is a universal issue), but it's something to try in the meantime if you get stuck.

Your trick doesn't work

Trace

```
2024-03-05 13:41:35,132 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-03-05 13:41:35,197 - ControlNet - INFO - Using preprocessor: openpose_full
2024-03-05 13:41:35,197 - ControlNet - INFO - preprocessor resolution = 664
Automatic Memory Management: 3 Modules in 0.26 seconds.
2024-03-05 13:41:50,160 - ControlNet - INFO - Current ControlNet ControlNetPatcher: D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\models\ControlNet\ControlNet\controlnets-ext\SDXL\t2i-adapter_xl_openpose.safetensors
2024-03-05 13:41:50,161 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-03-05 13:41:50,229 - ControlNet - INFO - Using preprocessor: depth_midas
2024-03-05 13:41:50,229 - ControlNet - INFO - preprocessor resolution = 664
Automatic Memory Management: 11 Modules in 0.35 seconds.
2024-03-05 13:41:58,449 - ControlNet - INFO - Current ControlNet ControlNetPatcher: D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\models\ControlNet\ControlNet\controlnets-ext\SDXL\diffusers_xl_depth_full.safetensors
NeverOOM Enabled for UNet (always maximize offload)
NeverOOM Enabled for VAE (always tiled)
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Requested SYNC Preserved Memory (MB) = 0.0
[Memory Management] Parameters Loaded to SYNC Stream (MB) = 319.11416244506836
[Memory Management] Parameters Loaded to GPU (MB) = 0.0
Moving model(s) has taken 0.01 seconds
VAE tiled encode: 20%|██ | 4/20 [00:14<00:59, 3.69s/it]Traceback (most recent call last):
  File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\modules\img2img.py", line 236, in img2img_function
    processed = process_images(p)
  File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\modules\processing.py", line 820, in process_images_inner
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
  File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\modules\processing.py", line 1653, in init
    self.init_latent = images_tensor_to_samples(image, approximation_indexes.get(opts.sd_vae_encode_method), self.sd_model)
  File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\modules\sd_samplers_common.py", line 107, in images_tensor_to_samples
    x_latent = model.get_first_stage_encoding(model.encode_first_stage(image))
  File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\modules_forge\forge_loader.py", line 244, in patched_encode_first_stage
    sample = sd_model.forge_objects.vae.encode(x.movedim(1, -1) * 0.5 + 0.5)
  File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\ldm_patched\modules\sd.py", line 320, in encode
    return self.encode_inner(pixel_samples)
  File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\ldm_patched\modules\sd.py", line 297, in encode_inner
    return
self.encode_tiled(pixel_samples) File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\ldm_patched\modules\sd.py", line 327, in encode_tiled samples = self.encode_tiled_(pixel_samples, tile_x=tile_x, tile_y=tile_y, overlap=overlap) File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\ldm_patched\modules\sd.py", line 255, in encode_tiled_ samples = ldm_patched.modules.utils.tiled_scale(pixel_samples, encode_fn, tile_x, tile_y, overlap, upscale_amount = (1/self.downscale_ratio), out_channels=self.latent_channels, output_device=self.output_device, pbar=pbar) File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\ldm_patched\modules\utils.py", line 433, in tiled_scale out[:,:,round(y*upscale_amount):round((y+tile_y)*upscale_amount),round(x*upscale_amount):round((x+tile_x)*upscale_amount)] += ps * mask RuntimeError: The size of tensor a (13) must match the size of tensor b (12) at non-singleton dimension 2 The size of tensor a (13) must match the size of tensor b (12) at non-singleton dimension 2 VAE tiled encode: 20%|██ | 4/20 [00:15<01:01, 3.86s/it] *** Error completing request *** Arguments: ('task(enmsaivfq2toe9k)', 0, 'realistic photo of 1girl, real, hyperrealistic, a woman posing naked in a locker room, sweat, teal hair, teal eyes, curvy, huge breasts, perky nipples, long hair, looking at viewer, navel, parted lips, sitting, tank top, thick thighs, thighs, wet', '(((anime, manga, cartoon, painting, drawing, sketch, illustration, render, CG, 3d, asian))), (((big nose, big eyes, small breasts, small tits, small boobs, flat chest, natural breasts, natural tits, natural boobs, saggy breasts, saggy tits, no nipples, missing nipples, areolas, areolae))), (watermark, signature, label)', [], , None, None, None, None, None, None, 20, 'Euler a', 4, 0, 1, 1, 1, 8.5, 1.5, 0.5, 1.0, 998, 665, 0.65, 0, 0, 32, 0, '', '', '', [], False, [], '', , 0, False, 1, 0.5, 4, 0, 0.5, 2, False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '1girl, girl', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'hand_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '1girl, girl', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 
'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 
'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 
'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=True, module='openpose_full', model='t2i-adapter_xl_openpose [18cb12c1]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=664, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=True, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=0.4, image=None, resize_mode='Crop and Resize', processor_res=664, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, 
pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, True, 128, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 
'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, True, True, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "D:\Programs\StabilityMatrix\Data\Packages\Stable Diffusion WebUI Forge\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
---
```

Oh, and the Generate button becomes unresponsive as well.
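
For context, the failing line in the trace is the tile-accumulation step in `ldm_patched/modules/utils.py` (`tiled_scale`). Below is a minimal sketch of its slice arithmetic (an editor's illustration, not Forge code), assuming a VAE downscale ratio of 8, a 512-pixel tile with 64-pixel overlap, and an encoder that returns ceil(pixels / 8) rows for a partial tile. Under those assumptions, a height such as 665 (the img2img size in the arguments above) produces a destination slice and an encoded tile that differ by one row, the same kind of mismatch the RuntimeError reports.

```python
# Editor's sketch (not Forge code): one plausible way the off-by-one in
# tiled_scale can arise when the image height is not aligned to the tile/8 grid.
import math

def dest_rows(y, tile_y, out_h, downscale=8):
    # Rows covered by out[:, :, round(y/8):round((y+tile_y)/8)] after clipping to the output height.
    y0 = round(y / downscale)
    y1 = min(round((y + tile_y) / downscale), out_h)
    return y1 - y0

def encoded_rows(y, tile_y, src_h, downscale=8):
    # Rows the encoder is assumed to return for the pixel slice [y : y + tile_y] (ceil division).
    return math.ceil((min(y + tile_y, src_h) - y) / downscale)

src_h = 665                  # image height from the arguments above (998x665)
out_h = round(src_h / 8)     # 83 latent rows allocated for the whole image
for y in range(0, src_h, 512 - 64):   # tile_y = 512, overlap = 64 (assumed defaults)
    print(y, dest_rows(y, 512, out_h), encoded_rows(y, 512, src_h))
# Prints "0 64 64" and then "448 27 28": the second tile produces 28 encoded rows
# but only 27 destination rows survive the clip, the same kind of one-row mismatch
# as the RuntimeError above (13 vs 12 there, presumably from a different tile size).
```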

Lyudamil commented 7 months ago

I have the same problem. I've noticed that it also depends on the sampler used: for me, only the Turbo samplers (Euler A Turbo, DPM++ 2M Turbo, and DPM++ SDE 2M Turbo) generate an image, while the rest produce this error.

ostap667inbox commented 7 months ago

> I have the same problem. I've noticed that it also depends on the sampler used: for me, only the Turbo samplers (Euler A Turbo, DPM++ 2M Turbo, and DPM++ SDE 2M Turbo) generate an image, while the rest produce this error.

No, I'm getting this error right now using the DPM++ 2M Turbo sampler. That's not the problem at all. :)

The error always appears during the VAE encoding process. It occurs when the base width or height of the image is not a multiple of 8.

We often don't notice this when using plugins that recalculate the image size for a selected aspect ratio. Here's an example: I set the base image to 1024x1024 and I need a 3:4 image, so I use the 'Aspect Ratio selector' plugin. I click the '3:4' button and get a width of 1024 and a height of 1365. That is where the problem starts: 1365 is not a multiple of 8. When generating normally I still get a 1024x1360 picture, because that kind of fool-proofing is built in.

However, if I use hires.fix, I get an error during VAE encoding after upscaling, because there is no such protection there. Just set correct sizes from the start and there will be no error.
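
To make the sizing rule concrete, here is a minimal sketch (an editor's addition, not code from Forge or the Aspect Ratio selector plugin) that snaps a requested dimension down to the nearest multiple of 8, turning the 1024x1365 request from the example above into the 1024x1360 that normal generation falls back to.

```python
# Editor's sketch (assumed behaviour, not Forge's or the plugin's exact code):
# snap a requested dimension down to the nearest multiple of 8 so the VAE's
# 8x downscale divides it evenly.
def snap_to_multiple(value: int, multiple: int = 8) -> int:
    return (value // multiple) * multiple

width, height = 1024, 1365                                  # the 3:4 request from the example
print(snap_to_multiple(width), snap_to_multiple(height))    # -> 1024 1360
```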

Vesperindustrial commented 2 weeks ago

I can confirm that this happens when using Tiled VAE (in the Never OOM Integrated plugin) with a 4:3 aspect ratio and hires.fix: it generates a black image, and occasionally fails with the aforementioned error. Thanks @ostap667inbox!