Neytiri7 opened this issue 5 months ago
I also get that error when using Hires fix with SwinIR_4x as the upscaler. The other upscalers don't trigger the issue, or at least not every time; I'm still testing. Please fix SwinIR_4x, as it's my go-to upscaler at the moment.
I don't use SwinIR_4x. I mainly use R-ESRGAN 4x+.
I tried R-ESRGAN 4x+ with 10 steps, 0.4 denoise, and 1.5x upscale (1024x1024 to 1536x1536), and it worked on my machine.
Can you test whether SwinIR_4x works for you?
I don't use SwinIR_4x at all.
The error only reproduces intermittently, so I'll keep using Forge as is.
I expect it will be fixed at some point.
I get that error as well and can't generate anything. I also noticed that, for some reason, it isn't applying ANY of my SDXL LoRAs.
I don't even use LoRAs, but I see the same behavior.
The traceback indicates an issue with https://github.com/portu-sim/sd-webui-bmab, not Forge or the webui. Open an issue there instead. https://github.com/portu-sim/sd-webui-bmab/issues/new
BMAB works normally whenever generation succeeds, so this problem has nothing to do with BMAB. I tried deleting it and got the same error, and other users are reporting the same issue without BMAB installed.
It seems BMAB is simply affected by the error as well.
Can you post a traceback for an instance when BMAB is disabled completely?
To load target model SDXL
Begin to load 1 model
loading in lowvram mode 1078.425479888916
Moving model(s) has taken 0.66 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:38<00:00, 1.93s/it]
To load target model AutoencoderKL
Begin to load 1 model
Moving model(s) has taken 0.60 seconds
Traceback (most recent call last):
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 750, in process_images
res = process_images_inner(p)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 921, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 1290, in sample
sd_models.reload_model_weights(info=self.hr_checkpoint_info)
TypeError: reload_model_weights() got an unexpected keyword argument 'info'
reload_model_weights() got an unexpected keyword argument 'info'
*** Error completing request
*** Arguments: ('task(d5oe7iluiyh5scd)', <gradio.routes.Request object at 0x0000023210C26CB0>, '__positive__,\nBREAK\n\n__001/hair_style_women__ hair, __001/color__ color hair, __001/color__ color eyes, __001/expression__, __001/makeup__, __001/makeup_eyes__, __001/breast_size__, __001/body_type__ body, __001/gaze__, __001/frame__,\nBREAK\n\n{hanbok|__001/fashion_all__|__001/clothes-swimsuit__|__001/clothes-brezier__|__001/clothes-panty__|__001/female_futuristic_clothing__|__001/clothes-lingerie__|__001/female_top__, __001/female_bottom__|__001/female_see_through_clothes__|__001/female_undies__|__001/clothes-preppy_look__|__001/clothes-women_costume__|__001/clothes-women_suit__|__001/clothes-dress__|__001/shorts__|__001/skirt__|__001/wedding_dress__|__001/fashion_all__|__001/fashion_spring__|__001/fashion_summer__|__001/fashion_fall__|__001/fashion_winter__}, {__001/clothes-stockings__|__001/clothes-shoes__|bare foot},\nBREAK\n\n{__001/19_places__|__001/background__|__001/best_cities__|__001/landmark__|__001/landscapes__|__001/landscape_composition__|__001/flower_garden__|__001/place__|__001/ocean__|__001/seaside_scenery__|__001/place_indoor__|__001/place_outdoor__|__001/travel_list_100__|__001/spring__|__001/summer__|__001/autumn__|__001/winter__|__001/global_destinations_500__|__001/travel_list_100__|__001/world_walks__|__001/world_small_towns__|__001/world_hikes__|__001/wonders_list__|__001/weirdest_places__}, __001/angle__,\nBREAK\n\n{__001/female-poses__|__001/pose__|__001/pose_extra__}, {daytime|evening|night|dawn|sunset|sunrise}, __001/weather__, ', '__negative__, ac_neg1,', [], 20, 'DPM++ 2M SDE Karras', 1, 1, 7, 512, 768, True, 0.39, 2, '4x-UltraMix_Balanced', 21, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, 0.03, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], 
generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 0.99, 'Half Cosine Up', 0, 'Power Up', 3, 13.5, 'enable', 'MEAN', 'AD', 0.97, False, 1.01, 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, 0.5, 2, False, True, False, {'ad_model': 'deepfashion2_yolov8s-seg.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 
'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.5, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'hand_yolov8s.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.7, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 
'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000023210909960>, False, False, 'positive', 'comma', 0, False, False, 'start', '', 0, '', [], 0, '', [], 0, '', [], False, False, False, True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
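The two tracebacks above are one failure in two layers: Forge's `reload_model_weights` does not accept the `info` keyword that `processing.py` passes during the Hires. fix checkpoint switch, so the task aborts and returns `None`, which `call_queue.py` then tries to iterate. A minimal stdlib sketch of the failure mode and a signature-checking workaround (the function bodies here are hypothetical stand-ins, not Forge's actual code):

```python
import inspect

def reload_model_weights(sd_model=None, checkpoint_info=None):
    # Hypothetical stand-in: like Forge's version, it declares no 'info' parameter.
    return checkpoint_info

def call_with_supported_kwargs(func, **kwargs):
    # Drop any keyword argument the target does not declare, instead of
    # letting the call raise TypeError and abort the whole task.
    params = inspect.signature(func).parameters
    return func(**{k: v for k, v in kwargs.items() if k in params})

# A direct call reproduces the error from the log:
try:
    reload_model_weights(info="sdxl.safetensors")
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'info'

# The filtered call survives by silently dropping the unsupported keyword:
result = call_with_supported_kwargs(reload_model_weights,
                                    info="sdxl.safetensors",
                                    checkpoint_info="sdxl.safetensors")
print(result)  # sdxl.safetensors
```

A real fix would instead align the two signatures (or pass the checkpoint positionally), but the sketch shows why the caller, not the upscaler choice, is what breaks here.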
Thank you, that's definitely more useful info. I'll re-open this now.
The same error occurred when trying to use AnimateDiff:
*** Error running before_process: D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff.py
Traceback (most recent call last):
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\scripts.py", line 795, in before_process
    script.before_process(p, *script_args)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 63, in before_process
    motion_module.inject(p.sd_model, params.model)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 112, in inject
    self._set_ddim_alpha(sd_model)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 178, in _set_ddim_alpha
    self.prev_alpha_cumprod_original = sd_model.alphas_cumprod_original
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'LatentDiffusion' object has no attribute 'alphas_cumprod_original'
  0%|          | 0/20 [00:00<?, ?it/s]
*** Error executing callback cfg_denoiser_callback for D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff.py
Traceback (most recent call last):
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\script_callbacks.py", line 233, in cfg_denoiser_callback
    c.callback(params)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 90, in animatediff_on_cfg_denoiser
    ad_params.text_cond = ad_params.prompt_scheduler.multi_cond(cfg_params.text_cond, prompt_closed_loop)
AttributeError: 'NoneType' object has no attribute 'multi_cond'
  0%|          | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\einops\einops.py", line 410, in reduce
    return _apply_recipe(recipe, tensor, reduction_type=reduction)
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\einops\einops.py", line 233, in _apply_recipe
    _reconstruct_from_shape(recipe, backend.shape(tensor))
  File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\einops\einops.py", line 198, in _reconstruct_from_shape_uncached
    raise EinopsError("Shape mismatch, can't divide axis of length {} in chunks of {}".format(
einops.EinopsError: Shape mismatch, can't divide axis of length 2 in chunks of 16
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules_forge\main_thread.py", line 37, in loop
task.work()
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\processing.py", line 752, in process_images
res = process_images_inner(p)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\processing.py", line 921, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\processing.py", line 1273, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\sd_samplers_common.py", line 263, in launch_sampling
return func()
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
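The root failure in the AnimateDiff log is the attribute lookup: Forge's model object is a `LatentDiffusion` that lacks the `alphas_cumprod_original` attribute the extension expects, and `torch.nn.Module.__getattr__` turns the missing attribute into the `AttributeError` above; the later einops shape mismatch is downstream fallout from the failed injection. A minimal sketch of the lookup pattern, using a plain Python stand-in rather than the actual classes:

```python
class LatentDiffusion:
    # Hypothetical stand-in for Forge's model object, which does not
    # define the alphas_cumprod_original attribute AnimateDiff expects.
    pass

sd_model = LatentDiffusion()

# Direct access fails just like the log:
try:
    _ = sd_model.alphas_cumprod_original
except AttributeError as e:
    print(e)  # 'LatentDiffusion' object has no attribute 'alphas_cumprod_original'

# A defensive lookup with a default lets the extension detect the missing
# attribute and bail out gracefully instead of crashing mid-generation:
alphas = getattr(sd_model, "alphas_cumprod_original", None)
if alphas is None:
    print("model is not compatible with this motion-module injection")
```

This only illustrates the failure shape; the actual fix is AnimateDiff gaining Forge-aware model handling, since Forge's internals differ from upstream webui's.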
Checklist
What happened?
An error message is displayed when running the high-resolution pass (Hires. fix).
Steps to reproduce the problem
What should have happened?
The pre-upscale image should be generated and stored, and then the high-resolution pass should run on it.
What browsers do you use to access the UI ?
Microsoft Edge
Sysinfo
sysinfo-2024-02-14-05-09.json
Console logs
Additional information
No response