lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0
5.43k stars 543 forks

[Bug]: `TypeError` when upscaling #243

Open Neytiri7 opened 5 months ago

Neytiri7 commented 5 months ago

Checklist

What happened?

An error is raised when the Hires. fix (high-resolution upscaling) pass runs.

Steps to reproduce the problem

  1. Generate an image with Hires. fix enabled.
  2. Wait for the first pass to finish.
  3. Confirm that the Hires. fix pass starts.
  4. An error message appears in the console.

What should have happened?

The first-pass (pre-upscale) image should be saved, and then the Hires. fix pass should run on it.

What browsers do you use to access the UI?

Microsoft Edge

Sysinfo

sysinfo-2024-02-14-05-09.json

Console logs

*** Error completing request
*** Arguments: ('task(muozxcbc2hle9j7)', <gradio.routes.Request object at 0x00000240D0809030>, '__positive__,\nBREAK\n\n__001/hair_style_women__ hair, __001/color__ color hair, __001/color__ color eyes, __001/expression__, __001/makeup__, __001/makeup_eyes__, __001/breast_size__, __001/body_type__ body, __001/gaze__, __001/frame__,\nBREAK\n\n{hanbok|__001/fashion_all__|__001/clothes-swimsuit__|__001/clothes-brezier__|__001/clothes-panty__|__001/female_futuristic_clothing__|__001/clothes-lingerie__|__001/female_top__, __001/female_bottom__|__001/female_see_through_clothes__|__001/female_undies__|__001/clothes-preppy_look__|__001/clothes-women_costume__|__001/clothes-women_suit__|__001/clothes-dress__|__001/shorts__|__001/skirt__|__001/wedding_dress__|__001/fashion_all__|__001/fashion_spring__|__001/fashion_summer__|__001/fashion_fall__|__001/fashion_winter__}, {__001/clothes-stockings__|__001/clothes-shoes__|bare foot},\nBREAK\n\n{__001/19_places__|__001/background__|__001/best_cities__|__001/landmark__|__001/landscapes__|__001/landscape_composition__|__001/flower_garden__|__001/place__|__001/ocean__|__001/seaside_scenery__|__001/place_indoor__|__001/place_outdoor__|__001/travel_list_100__|__001/spring__|__001/summer__|__001/autumn__|__001/winter__|__001/global_destinations_500__|__001/travel_list_100__|__001/world_walks__|__001/world_small_towns__|__001/world_hikes__|__001/wonders_list__|__001/weirdest_places__}, __001/angle__,\nBREAK\n\n{__001/female-poses__|__001/pose__|__001/pose_extra__}, {daytime|evening|night|dawn|sunset|sunrise}, __001/weather__, ', '__negative__, ac_neg1,', [], 20, 'DPM++ SDE Karras', 1, 1, 7, 768, 512, True, 0.37, 2, 'R-ESRGAN 4x+', 20, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, 0.03, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, 
mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 0.99, 'Half Cosine Up', 0, 'Power Up', 3, 13.5, 'enable', 'MEAN', 'AD', 0.97, True, False, {'ad_model': 'deepfashion2_yolov8s-seg.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': 
False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.5, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'hand_yolov8s.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.7, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 
'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000240D080A170>, False, 'Use same checkpoint', 'Use same vae', 1, 0, 'None', 'None', False, 0.15, 3, 0.4, 4, 'bicubic', 0.5, 2, True, False, True, False, False, False, 'Use same checkpoint', 'Use same vae', 'txt2img-1pass', 'None', '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 0.5, 0, 1, False, False, 'Select Model', '', '', 'Use same sampler', 20, 7, 0.75, 4, 0.35, False, 50, 200, 0.5, False, True, 'stretching', 'bottom', 'None', 0.85, 0.75, False, 'Use same checkpoint', True, '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 1, 0, 0, 0.95, 1, 1.3, 1, 0, 0, 0, None, False, 1, False, '', True, False, False, True, True, 4, 2, 0.1, 1, 1, 0, 0.4, 7, True, False, True, 'Score', 1, '', '', '', '', '', '', '', '', '', '', False, 512, 512, 7, 20, 4, 'Use same sampler', 'Only masked', 32, 'Ultralytics(face_yolov8m.pt)', 0.4, 4, 0.3, False, 0.26, True, True, False, 'subframe', '', '', 0.4, 7, True, 4, 0.3, 0.1, 'Whole picture', 32, '', False, False, False, 0.4, 0, 1, False, 'Inpaint', 0.85, 0.4, 10, False, True, 'None', 1.5, 'None', 'nomal', 'None', False, False, 'positive', 'comma', 0, False, False, 'start', '', 0, '', [], 0, '', [], 0, '', [], False, False, False, True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

---
To load target model SDXL
Begin to load 1 model
loading in lowvram mode 726.4464387893677
Moving model(s) has taken 0.46 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:23<00:00,  7.16s/it]
To load target model AutoencoderKL
Begin to load 1 model
Moving model(s) has taken 0.46 seconds
Traceback (most recent call last):
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 750, in process_images
    res = process_images_inner(p)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 921, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\extensions\sd-webui-bmab\sd_bmab\sd_override\txt2img.py", line 50, in sample
    sd_models.reload_model_weights(info=self.hr_checkpoint_info)
TypeError: reload_model_weights() got an unexpected keyword argument 'info'
reload_model_weights() got an unexpected keyword argument 'info'
*** Error completing request
*** Arguments: ('task(ku1303dy4g62xpr)', <gradio.routes.Request object at 0x00000240D0809780>, '__positive__,\nBREAK\n\n__001/hair_style_women__ hair, __001/color__ color hair, __001/color__ color eyes, __001/expression__, __001/makeup__, __001/makeup_eyes__, __001/breast_size__, __001/body_type__ body, __001/gaze__, __001/frame__,\nBREAK\n\n{hanbok|__001/fashion_all__|__001/clothes-swimsuit__|__001/clothes-brezier__|__001/clothes-panty__|__001/female_futuristic_clothing__|__001/clothes-lingerie__|__001/female_top__, __001/female_bottom__|__001/female_see_through_clothes__|__001/female_undies__|__001/clothes-preppy_look__|__001/clothes-women_costume__|__001/clothes-women_suit__|__001/clothes-dress__|__001/shorts__|__001/skirt__|__001/wedding_dress__|__001/fashion_all__|__001/fashion_spring__|__001/fashion_summer__|__001/fashion_fall__|__001/fashion_winter__}, {__001/clothes-stockings__|__001/clothes-shoes__|bare foot},\nBREAK\n\n{__001/19_places__|__001/background__|__001/best_cities__|__001/landmark__|__001/landscapes__|__001/landscape_composition__|__001/flower_garden__|__001/place__|__001/ocean__|__001/seaside_scenery__|__001/place_indoor__|__001/place_outdoor__|__001/travel_list_100__|__001/spring__|__001/summer__|__001/autumn__|__001/winter__|__001/global_destinations_500__|__001/travel_list_100__|__001/world_walks__|__001/world_small_towns__|__001/world_hikes__|__001/wonders_list__|__001/weirdest_places__}, __001/angle__,\nBREAK\n\n{__001/female-poses__|__001/pose__|__001/pose_extra__}, {daytime|evening|night|dawn|sunset|sunrise}, __001/weather__, ', '__negative__, ac_neg1,', [], 20, 'DPM++ SDE Karras', 1, 1, 7, 768, 512, True, 0.37, 2, 'R-ESRGAN 4x+', 20, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, 0.03, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, 
mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 0.99, 'Half Cosine Up', 0, 'Power Up', 3, 13.5, 'enable', 'MEAN', 'AD', 0.97, True, False, {'ad_model': 'deepfashion2_yolov8s-seg.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': 
False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.5, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'hand_yolov8s.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.7, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 
'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000240D080B7F0>, False, 'Use same checkpoint', 'Use same vae', 1, 0, 'None', 'None', False, 0.15, 3, 0.4, 4, 'bicubic', 0.5, 2, True, False, True, False, False, False, 'Use same checkpoint', 'Use same vae', 'txt2img-1pass', 'None', '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 0.5, 0, 1, False, False, 'Select Model', '', '', 'Use same sampler', 20, 7, 0.75, 4, 0.35, False, 50, 200, 0.5, False, True, 'stretching', 'bottom', 'None', 0.85, 0.75, False, 'Use same checkpoint', True, '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 1, 0, 0, 0.95, 1, 1.3, 1, 0, 0, 0, None, False, 1, False, '', True, False, False, True, True, 4, 2, 0.1, 1, 1, 0, 0.4, 7, True, False, True, 'Score', 1, '', '', '', '', '', '', '', '', '', '', False, 512, 512, 7, 20, 4, 'Use same sampler', 'Only masked', 32, 'Ultralytics(face_yolov8m.pt)', 0.4, 4, 0.3, False, 0.26, True, True, False, 'subframe', '', '', 0.4, 7, True, 4, 0.3, 0.1, 'Whole picture', 32, '', False, False, False, 0.4, 0, 1, False, 'Inpaint', 0.85, 0.4, 10, False, True, 'None', 1.5, 'None', 'nomal', 'None', False, False, 'positive', 'comma', 0, False, False, 'start', '', 0, '', [], 0, '', [], 0, '', [], False, False, False, True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

---
activating extra network lora with arguments [<modules.extra_networks.ExtraNetworkParams object at 0x00000240D0CB2DA0>]: AttributeError
Traceback (most recent call last):
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\extra_networks.py", line 135, in activate
    extra_network.activate(p, extra_network_args)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\extensions-builtin\Lora\extra_networks_lora.py", line 43, in activate
    networks.load_networks(names, te_multipliers, unet_multipliers, dyn_dims)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\extensions-builtin\Lora\networks.py", line 51, in load_networks
    compiled_lora_targets.append([a.filename, b, c])
AttributeError: 'NoneType' object has no attribute 'filename'

To load target model SDXL
Begin to load 1 model
loading in lowvram mode 722.5157747268677
Moving model(s) has taken 0.46 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [01:13<00:00,  3.69s/it]
To load target model AutoencoderKL
Begin to load 1 model
Moving model(s) has taken 0.39 seconds
Traceback (most recent call last):
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 750, in process_images
    res = process_images_inner(p)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 921, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\extensions\sd-webui-bmab\sd_bmab\sd_override\txt2img.py", line 50, in sample
    sd_models.reload_model_weights(info=self.hr_checkpoint_info)
TypeError: reload_model_weights() got an unexpected keyword argument 'info'
reload_model_weights() got an unexpected keyword argument 'info'
*** Error completing request
*** Arguments: ('task(1hpej9ze7mc1r0u)', <gradio.routes.Request object at 0x0000023CAE602050>, '<lora:adapted_model_converted:0.7> (RAW photo:1.4, best quality:1.4, photo realistic:1.4, realistic:1.4), (cute korean girl), (1girl,solo), detailed background, pale skin, (intricate details:1.3), perfect eyes, navel, cameltoe, covered nipple:0.1 sharp_pointed_nose:1.4, (detailed skin:1.3), sharp focus, delicate,\nBREAK\n\nVintage waves hair, aqua_blue color hair, Sepia color eyes, Grateful, Makeup remover, cream eyeliner, huge breasts, elegant body, looking at another, upper body,\nBREAK\n\nDenim_button-up_shirtcorduroy_skirtwhite_ankle_bootsblack_crossbody_baglayered_necklace_set, Peep-toe booties,\nBREAK\n\nHawaii Volcanoes National Park, vanishing point,\nBREAK\n\nPosing with hands behind the back, looking serious, sunset, Typhoon,', 'EasyNegativeV2, nsfw, (worst quality, low quality, normal quality:1.3), (deformed, distorted, disfigured:1.2), (blurry:1.2), (bad anatomy, extra_anatomy:1.3, wrong anatomy), poorly drawn, ugly face, glans, fat, missing fingers, extra fingers, extra arms, extra legs, ((watermark, text, logo,symbol)), extra limb, missing limb, floating limbs, error, jpeg artifacts, cropped, bad anatomy, double navel, muscle, cleavage, bad detailed background, (stomach muscles), (nipple over clothes:1.2), (nipples sticking out of clothes:1.2), ((abs:1.2)), ((stomach muscles:1.2)), (mutated hands and fingers:1.2), disconnected limbs, mutation, mutated,', [], 20, 'DPM++ 2M SDE Karras', 1, 1, 7, 768, 512, True, 0.39, 2, '4x-UltraMix_Balanced', 21, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, 0.03, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', 
weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 0.99, 'Half Cosine Up', 0, 'Power Up', 3, 13.5, 'enable', 'MEAN', 'AD', 0.97, True, False, {'ad_model': 'deepfashion2_yolov8s-seg.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 
'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.5, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'hand_yolov8s.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.7, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 
'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000240D082B4C0>, True, 'Use same checkpoint', 'Use same vae', 1, 0, 'None', 'None', False, 0.15, 3, 0.4, 4, 'bicubic', 0.5, 2, True, False, True, False, False, False, 'Use same checkpoint', 'Use same vae', 'txt2img-1pass', 'None', '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 0.5, 0, 1, False, False, 'Select Model', '', '', 'Use same sampler', 20, 7, 0.75, 4, 0.35, False, 50, 200, 0.5, False, True, 'stretching', 'bottom', 'None', 0.85, 0.75, False, 'Use same checkpoint', True, '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 1, 0, 0, 0.95, 1, 1.3, 1, 0, 0, 0, None, False, 1, False, '', True, False, False, True, True, 4, 2, 0.1, 1, 1, 0, 0.4, 7, True, False, True, 'Score', 1, '', '', '', '', '', '', '', '', '', '', False, 512, 512, 7, 20, 4, 'Use same sampler', 'Only masked', 32, 'Ultralytics(face_yolov8m.pt)', 0.4, 4, 0.3, False, 0.26, True, True, False, 'subframe', '', '', 0.4, 7, True, 4, 0.3, 0.1, 'Whole picture', 32, '', False, False, False, 0.4, 0, 1, False, 'Inpaint', 0.85, 0.4, 10, False, True, 'None', 1.5, 'None', 'nomal', 'None', False, False, 'positive', 'comma', 0, False, False, 'start', '', 0, '', [], 0, '', [], 0, '', [], False, False, False, True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

Additional information

No response
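For context, the repeated `'NoneType' object is not iterable` lines in the logs above are a downstream symptom rather than a separate bug: the webui's request wrapper in `modules/call_queue.py` does roughly `res = list(func(*args, **kwargs))`, and when the wrapped task has already failed on the worker thread, `func` returns `None`, so `list(None)` raises. A minimal sketch of that interaction (`wrap_call` and `failing_task` are hypothetical stand-in names, not the webui's actual identifiers):

```python
def wrap_call(func):
    """Mimics the call-queue wrapper: it assumes func returns an iterable of results."""
    def f(*args, **kwargs):
        # If the underlying task already failed and returned None,
        # this raises: TypeError: 'NoneType' object is not iterable
        res = list(func(*args, **kwargs))
        return res
    return f

def failing_task():
    # Stands in for a generation task whose real work raised earlier
    # and whose result was therefore never set.
    return None

try:
    wrap_call(failing_task)()
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable
```

So the error to diagnose is the one raised first on the worker thread (here, the `reload_model_weights` TypeError), not the `NoneType` message that follows it.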

ikoseu commented 5 months ago

I also get this error when using Hires. fix with SwinIR_4x as the upscaler. The other upscalers do not cause the issue, or at least not every time; I'm still testing. Please fix SwinIR_4x, as it's my go-to upscaler at the moment.

Neytiri7 commented 5 months ago

I also get this error when using Hires. fix with SwinIR_4x as the upscaler. The other upscalers do not cause the issue, or at least not every time; I'm still testing. Please fix SwinIR_4x, as it's my go-to upscaler at the moment.

I don't use SwinIR_4x. I mainly use R-ESRGAN 4x+.

ikoseu commented 5 months ago

I don't use SwinIR_4x. I mainly use R-ESRGAN 4x+.

I tried R-ESRGAN 4x+, 10 steps, 0.4 denoise, 1.5x upscale (1024x1024 to 1536x1536), and it worked on my machine.

Can you test whether SwinIR_4x works for you?

Neytiri7 commented 5 months ago

I don't use SwinIR_4x at all.

The error only reproduces intermittently, so I'll keep using Forge as it is.

I expect it will be fixed eventually.

DiggyDre commented 5 months ago

I get that error as well. I'm not able to generate anything. I also noticed that, for some reason, it isn't applying ANY of my SDXL LoRAs.

Neytiri7 commented 5 months ago

I get that error as well. I'm not able to generate anything. I also noticed that, for some reason, it isn't applying ANY of my SDXL LoRAs.

I don't even use LoRA, and I still get the error.

catboxanon commented 5 months ago

The traceback indicates an issue with https://github.com/portu-sim/sd-webui-bmab, not Forge or the webui. Open an issue there instead. https://github.com/portu-sim/sd-webui-bmab/issues/new
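The immediate cause shown in the traceback is an extension calling Forge's `reload_model_weights` with an `info=` keyword argument that Forge's version of the function does not accept (its signature differs from upstream webui). A minimal sketch of the mismatch, and of a signature check a caller could use to stay compatible with both versions. The stand-in `reload_model_weights` below is hypothetical; only the fact that Forge's real function rejects `info=` comes from the traceback:

```python
import inspect

def reload_model_weights(sd_model=None):
    """Stand-in for Forge's version, which (per the traceback) has no `info` parameter."""
    return sd_model

hr_checkpoint_info = object()  # placeholder for a checkpoint-info object

# Calling with the unsupported keyword reproduces the reported error:
try:
    reload_model_weights(info=hr_checkpoint_info)
except TypeError as e:
    print(e)  # reload_model_weights() got an unexpected keyword argument 'info'

# A defensive caller can inspect the signature before passing the keyword:
params = inspect.signature(reload_model_weights).parameters
if "info" in params:
    reload_model_weights(info=hr_checkpoint_info)
else:
    reload_model_weights()
```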

Neytiri7 commented 5 months ago

The traceback indicates an issue with https://github.com/portu-sim/sd-webui-bmab, not Forge or the webui. Open an issue there instead. https://github.com/portu-sim/sd-webui-bmab/issues/new

BMAB works normally when generation succeeds. This problem is not caused by BMAB: I tried deleting it and got the same error, and other users report the same issue without BMAB installed.

It seems BMAB is simply affected by the same underlying error.

catboxanon commented 5 months ago

Can you post a traceback for an instance when BMAB is disabled completely?

Neytiri7 commented 5 months ago
To load target model SDXL
Begin to load 1 model
loading in lowvram mode 1078.425479888916
Moving model(s) has taken 0.66 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:38<00:00,  1.93s/it]
To load target model AutoencoderKL
Begin to load 1 model
Moving model(s) has taken 0.60 seconds
Traceback (most recent call last):
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 750, in process_images
    res = process_images_inner(p)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 921, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 1290, in sample
    sd_models.reload_model_weights(info=self.hr_checkpoint_info)
TypeError: reload_model_weights() got an unexpected keyword argument 'info'
reload_model_weights() got an unexpected keyword argument 'info'
*** Error completing request
*** Arguments: ('task(d5oe7iluiyh5scd)', <gradio.routes.Request object at 0x0000023210C26CB0>, '__positive__,\nBREAK\n\n__001/hair_style_women__ hair, __001/color__ color hair, __001/color__ color eyes, __001/expression__, __001/makeup__, __001/makeup_eyes__, __001/breast_size__, __001/body_type__ body, __001/gaze__, __001/frame__,\nBREAK\n\n{hanbok|__001/fashion_all__|__001/clothes-swimsuit__|__001/clothes-brezier__|__001/clothes-panty__|__001/female_futuristic_clothing__|__001/clothes-lingerie__|__001/female_top__, __001/female_bottom__|__001/female_see_through_clothes__|__001/female_undies__|__001/clothes-preppy_look__|__001/clothes-women_costume__|__001/clothes-women_suit__|__001/clothes-dress__|__001/shorts__|__001/skirt__|__001/wedding_dress__|__001/fashion_all__|__001/fashion_spring__|__001/fashion_summer__|__001/fashion_fall__|__001/fashion_winter__}, {__001/clothes-stockings__|__001/clothes-shoes__|bare foot},\nBREAK\n\n{__001/19_places__|__001/background__|__001/best_cities__|__001/landmark__|__001/landscapes__|__001/landscape_composition__|__001/flower_garden__|__001/place__|__001/ocean__|__001/seaside_scenery__|__001/place_indoor__|__001/place_outdoor__|__001/travel_list_100__|__001/spring__|__001/summer__|__001/autumn__|__001/winter__|__001/global_destinations_500__|__001/travel_list_100__|__001/world_walks__|__001/world_small_towns__|__001/world_hikes__|__001/wonders_list__|__001/weirdest_places__}, __001/angle__,\nBREAK\n\n{__001/female-poses__|__001/pose__|__001/pose_extra__}, {daytime|evening|night|dawn|sunset|sunrise}, __001/weather__, ', '__negative__, ac_neg1,', [], 20, 'DPM++ 2M SDE Karras', 1, 1, 7, 512, 768, True, 0.39, 2, '4x-UltraMix_Balanced', 21, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, 0.03, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], 
generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 0.99, 'Half Cosine Up', 0, 'Power Up', 3, 13.5, 'enable', 'MEAN', 'AD', 0.97, False, 1.01, 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, 0.5, 2, False, True, False, {'ad_model': 'deepfashion2_yolov8s-seg.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 
'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.5, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'hand_yolov8s.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.7, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 
'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000023210909960>, False, False, 'positive', 'comma', 0, False, False, 'start', '', 0, '', [], 0, '', [], 0, '', [], False, False, False, True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable
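The `TypeError` above boils down to a keyword-argument mismatch: the hires-fix code path in `processing.py` calls `sd_models.reload_model_weights(info=self.hr_checkpoint_info)`, but Forge's version of `reload_model_weights` apparently does not accept an `info` parameter. A minimal sketch of the same failure mode, using a hypothetical stand-in function rather than Forge's actual implementation:

```python
# Hypothetical stand-in for Forge's reload_model_weights, which (per the
# traceback) has no `info` parameter, while the hires-fix caller still
# passes one in the upstream-webui style.
def reload_model_weights(sd_model=None):
    return sd_model

try:
    # mimics: sd_models.reload_model_weights(info=self.hr_checkpoint_info)
    reload_model_weights(info="hr_checkpoint_info")
except TypeError as e:
    print(e)  # ...got an unexpected keyword argument 'info'
```

Aligning the function's signature with its callers (or dropping the keyword at the call site) would remove this class of error.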
catboxanon commented 5 months ago

Thank you, that's definitely more useful info. I'll re-open this now.

miaoshouai commented 4 months ago

A similar error occurred when trying to use AnimateDiff.

*** Error running before_process: D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\scripts.py", line 795, in before_process
        script.before_process(p, *script_args)
      File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 63, in before_process
        motion_module.inject(p.sd_model, params.model)
      File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 112, in inject
        self._set_ddim_alpha(sd_model)
      File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 178, in _set_ddim_alpha
        self.prev_alpha_cumprod_original = sd_model.alphas_cumprod_original
      File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1695, in __getattr__
        raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
    AttributeError: 'LatentDiffusion' object has no attribute 'alphas_cumprod_original'


  0%|          | 0/20 [00:00<?, ?it/s]
*** Error executing callback cfg_denoiser_callback for D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\script_callbacks.py", line 233, in cfg_denoiser_callback
        c.callback(params)
      File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 90, in animatediff_on_cfg_denoiser
        ad_params.text_cond = ad_params.prompt_scheduler.multi_cond(cfg_params.text_cond, prompt_closed_loop)
    AttributeError: 'NoneType' object has no attribute 'multi_cond'


  0%|          | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\einops\einops.py", line 410, in reduce
    return _apply_recipe(recipe, tensor, reduction_type=reduction)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\einops\einops.py", line 233, in _apply_recipe
    _reconstruct_from_shape(recipe, backend.shape(tensor))
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\einops\einops.py", line 198, in _reconstruct_from_shape_uncached
    raise EinopsError("Shape mismatch, can't divide axis of length {} in chunks of {}".format(
einops.EinopsError: Shape mismatch, can't divide axis of length 2 in chunks of 16

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\processing.py", line 921, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\processing.py", line 1273, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\sd_samplers_common.py", line 263, in launch_sampling
    return func()
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules_forge\forge_sampler.py", line 88, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\ldm_patched\modules\samplers.py", line 289, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\ldm_patched\modules\samplers.py", line 258, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\ldm_patched\modules\model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 867, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 61, in forward_timestep_embed
    x = layer(x)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\motion_module.py", line 132, in forward
    return self.temporal_transformer(x)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\motion_module.py", line 190, in forward
    hidden_states = block(hidden_states)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\motion_module.py", line 244, in forward
    hidden_states = attention_block(norm_hidden_states) + hidden_states
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\motion_module.py", line 333, in forward
    x = rearrange(x, "(b f) d c -> (b d) f c", f=video_length)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\einops\einops.py", line 487, in rearrange
    return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\einops\einops.py", line 418, in reduce
    raise EinopsError(message + '\n {}'.format(e))
einops.EinopsError: Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c".
 Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'f': 16}.
 Shape mismatch, can't divide axis of length 2 in chunks of 16
Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c". Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'f': 16}.
Shape mismatch, can't divide axis of length 2 in chunks of 16 ** Error completing request Arguments: ('task(4m9exf669adfqba)', <gradio.routes.Request object at 0x000001F7C844C0D0>, 'closeup portrait photo of beautiful 26 y.o woman, makeup, 8k uhd, high quality, dramatic, cinematic\n', '(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime),text,cropped,out of frame,worst quality,low quality,jpeg artifacts,ugly,duplicate,morbid,mutilated,extra fingers,mutated hands,poorly drawn hands,poorly drawn face,mutation,deformed,blurry,dehydrated,bad anatomy,bad proportions,extra limbs,cloned face,disfigured,gross proportions,malformed limbs,missing arms,missing legs,extra arms,extra legs,fused fingers,too many fingers,long neck,', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, 850651337, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': 
()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, <scripts.animatediff_ui.AnimateDiffProcess object at 0x000001F7EEFBFA30>, <PIL.Image.Image image mode=RGB size=1242x1206 at 0x1F7EEFF5E10>, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 0.5, 0, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, False, False, False, False, False, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='InsightFace (InstantID)', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=0.5, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', 
save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, 
False, 0, False) {}
    Traceback (most recent call last):
      File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable
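The AnimateDiff failure above is a shape constraint rather than a missing argument: the motion module's `rearrange(x, "(b f) d c -> (b d) f c", f=video_length)` assumes the leading axis is batch × frames with `f=16`, but this generation only produced a leading axis of 2 (cond + uncond), and 2 is not divisible by 16. A minimal sketch of the same divisibility constraint, using a hypothetical helper rather than AnimateDiff's or einops' actual code:

```python
# Hypothetical helper mirroring the check behind einops'
# "(b f) d c -> (b d) f c" pattern: the leading axis must factor as b * f.
def split_batch_frames(batch_axis_len: int, f: int) -> int:
    if batch_axis_len % f != 0:
        raise ValueError(
            f"Shape mismatch, can't divide axis of length {batch_axis_len} "
            f"in chunks of {f}"
        )
    return batch_axis_len // f

print(split_batch_frames(32, 16))  # 32 = 2 batches x 16 frames -> b = 2
try:
    split_batch_frames(2, 16)      # plain cond/uncond batch, no frames axis
except ValueError as e:
    print(e)
```

In other words, the motion module only works when it actually receives a stack of video frames; a plain single-image batch cannot satisfy `f=16`.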