pkuliyi2015 / multidiffusion-upscaler-for-automatic1111

Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0

AttributeError: 'DemoFusion' object has no attribute 'sample_one_step_local' #387

Open QR-0W opened 2 months ago

QR-0W commented 2 months ago

[Demo Fusion] ControlNet found, support is enabled.
warn: noise inversion only supports the "Euler" sampler, switch to it sliently... (ext: ContrlNet)
[Tiled VAE]: the input size is tiny and unnecessary to tile.

Encoding Real Image

Phase 1 Denoising

Tile size: 128, Tile count: 12, Batch size: 4, Tile batches: 3, Global batch size: 1, Global batches: 1 Error completing request Arguments: ('task(vt4mpo9hvpw6xjz)', <gradio.routes.Request object at 0x000001AF2428A4D0>, 0, 'highres, absurdres, best quality, masterpiece, flat coating, clear color, bloom, cool tone,', 'worst quality, low quality, lowres, jpeg artifacts, bad perspective, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, sketch,', [], <PIL.Image.Image image mode=RGBA size=1248x2256 at 0x1AF2295AA70>, None, None, None, None, None, None, 4, 0, 1, 1, 1, 5, 1.5, 0.25, 0.0, 1664, 928, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', 0, 35, 'Euler a', 'Automatic', False, 1, 0.5, 4, 0, 0.5, 2, -1, False, -1, 0, 0, 0, False, '', 0.8, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 
'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'Mixture of Diffusers', False, True, 1024, 1024, 96, 96, 72, 2, 'realesr-animevideov3', 2, True, 25, 5.5, 0.45, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 'DemoFusion', True, 128, 96, 4, 2, True, 25, 3.5, 0.4, 128, False, True, 3, 1, 1, True, 0.85, 0.5, 4, False, True, 4096, 128, True, True, True, True, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=64, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', 
inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=64, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=64, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), False, '', 0.5, True, False, '', 'Lerp', False, True, False, False, True, True, 'Space', 'Dash', False, False, 0, 0, 1, 0, 0, 0, False, False, 'Straight Abs.', 'Flat', False, 'After applying other prompt processings', -1.0, 'long', '', '<|special|>, \n<|characters|>, <|copyrights|>, \n<|artist|>, \n\n<|general|>, \n\n<|quality|>, <|meta|>, <|rating|>', 1.35, 'KBlueLeaf/DanTagGen-gamma', ' CFG Scale should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, 7, 1.5, True, '16bpc', '.tiff', 1.2, True, False, 0, 'Range', 1, 'GPU', True, False, False, False, False, 0, 512, False, 512, False, False, 3, False, 3, True, 3, False, 'Horizontal', False, False, 'u2net', False, True, True, False, 0, 2.5, 'polylines_sharp', ['left-right', 'red-cyan-anaglyph'], 2, 0, '∯boost∯clipdepth∯clipdepth_far∯clipdepth_mode∯clipdepth_near∯compute_device∯do_output_depth∯gen_normalmap∯gen_rembg∯gen_simple_mesh∯gen_stereo∯model_type∯net_height∯net_size_match∯net_width∯normalmap_invert∯normalmap_post_blur∯normalmap_post_blur_kernel∯normalmap_pre_blur∯normalmap_pre_blur_kernel∯normalmap_sobel∯normalmap_sobel_kernel∯output_depth_combine∯output_depth_combine_axis∯output_depth_invert∯pre_depth_background_removal∯rembg_model∯save_background_removal_masks∯save_outputs∯simple_mesh_occlude∯simple_mesh_spherical∯stereo_balance∯stereo_divergence∯stereo_fill_algo∯stereo_modes∯stereo_offset_exponent∯stereo_separation') {}

Traceback (most recent call last):
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\img2img.py", line 232, in img2img
    processed = process_images(p)
  File "E:\stable-diffusion-webui\modules\processing.py", line 845, in process_images
    res = process_images_inner(p)
  File "E:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "", line 219, in process_images_inner
  File "E:\stable-diffusion-webui\modules\processing.py", line 981, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "E:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\tileglobal.py", line 223, in <lambda>
    p.sample = lambda conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength, prompts: self.sample_hijack(
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\tileglobal.py", line 349, in sample_hijack
    p.latents = p.sampler.sample_img2img(p, p.latents, noise, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning)
  File "E:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 252, in wrapper
    return fn(*args, **kwargs)
  File "E:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 252, in wrapper
    return fn(*args, **kwargs)
  File "E:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\abstractdiffusion.py", line 643, in sample_img2img
    latent = self.find_noise_for_image_sigma_adjustment(sampler.model_wrap, self.noise_inverse_steps, prompts)
  File "E:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 252, in wrapper
    return fn(*args, **kwargs)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\abstractdiffusion.py", line 725, in find_noise_for_image_sigma_adjustment
    eps = self.get_noise(x_in * c_in, t, cond_in, steps - i)
  File "E:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\demofusion.py", line 351, in get_noise
    return self.sample_one_step_local(x_in, sigma_in, cond_in_original)
AttributeError: 'DemoFusion' object has no attribute 'sample_one_step_local'


What happened?

carlosgalveias commented 2 weeks ago

I'm getting the same issue. Did you fix it?

Jaylen-Lee commented 2 days ago

Thank you for reporting this issue. The bug has been fixed in the latest PR. If you are on an older version, just don't tick "Noise Inversion" in DemoFusion. Have a good time!
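For anyone still on an older checkout, the last traceback frame tells the whole story: `get_noise` in `demofusion.py` calls a method that the `DemoFusion` class never defines, so Python raises `AttributeError` as soon as noise inversion reaches that code path. A minimal sketch reproducing the failure (class and method names taken from the traceback; everything else is hypothetical):

```python
class DemoFusion:
    """Hypothetical stand-in for the extension's DemoFusion tile method."""

    def get_noise(self, x_in, sigma_in, cond_in):
        # The buggy release references a method that was never defined
        # on this class, so attribute lookup fails at call time.
        return self.sample_one_step_local(x_in, sigma_in, cond_in)


demo = DemoFusion()
try:
    demo.get_noise(None, None, None)
except AttributeError as e:
    print(e)  # 'DemoFusion' object has no attribute 'sample_one_step_local'
```

This is why unticking Noise Inversion works as a stopgap: the broken `get_noise` path is simply never executed.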