continue-revolution / sd-webui-animatediff

AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI

[Bug]: Inpainting checkpoint gives an error message with "img2img" Batch mask. #408

Closed zopi4k closed 8 months ago

zopi4k commented 9 months ago

Is there an existing issue for this?

Have you read FAQ on README?

What happened?

When I use a mask in img2img Batch, it works with a "normal" checkpoint. But when I select an "inpainting" checkpoint, it generates an error message.

Yet this same inpainting checkpoint works with AnimateDiff when "Inpaint batch mask directory" is left empty in "Batch". The error only occurs when "Inpaint batch mask directory" is set.

Steps to reproduce the problem

1. I enter my directory paths in "img2img" Batch, including "Inpaint batch mask directory" (see screenshot).

2. I choose an inpainting checkpoint.

3. The generation stops at the beginning and displays an error message (see screenshot).

What should have happened?

Generation should complete with the inpainting checkpoint. Without the "inpainting" version of the checkpoint, the rendering around the mask is corrupted: for example, a woman in front of a background that we want to change with a mask gets a confusing rendering around her silhouette, with non-existent arms and hands generated. Using an inpainting checkpoint helps prevent this.

Commit where the problem happens

webui: v1.7.0 • python: 3.10.6 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2 • checkpoint: [e53de42303] • extensions: ControlNet v1.1.435 (not active for this example) / AnimateDiff v1.13.1

What browsers do you use to access the UI ?

Chrome Version 120.0.6099.225 (Official Build) (64-bit)

Command Line Arguments

--autolaunch --medvram --xformers --ckpt-dir S:\Model_1.5 --no-half-vae --api --medvram-sdxl

Console logs

1. No error in the browser console (F12).

2. Error in the WebUI: AttributeError: 'list' object has no attribute 'convert'

3. Error in the terminal:

*** Error completing request
*** Arguments: ('task(fd6lsiw54lax48d)', 5, 'beach, palm, sand, ocean, day', '', ['(low quality:1.3)'], None, None, None, None, None, None, None, 20, 'Euler a', 4, 0, 1, 1, 1, 7, 1.5, 0.98, 0, 800, 600, 1, 1, 0, 32, 0, 'd:\\Stable-diffusion\\test_video\\test_webcam_vs_phone\\rainbow\\2M_sam_v2_ok_mask_imgBG\\', 'd:\\Stable-diffusion\\test_video\\test_webcam_vs_phone\\rainbow\\2m_sam_v2_out\\', 'd:\\Stable-diffusion\\test_video\\test_webcam_vs_phone\\rainbow\\2M_sam_v2_Mask_BG\\', [], False, [], '', <gradio.routes.Request object at 0x0000022199103100>, 0, False, '', 0.8, 1234, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000022199103220>, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', 1, 1, ['After Upscaling/Before Restore Face'], 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, 
None, None, False, None, None, False, None, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '', '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\stable-diffusion-webui\extensions\sd-webui-inpaint-difference\lib_inpaint_difference\webui_hijacks.py", line 30, in hijack_func
        return original_img2img_processing(id_task, mode, prompt, negative_prompt, prompt_styles, init_img, sketch,
      File "D:\stable-diffusion-webui\extensions\sd-webui-inpaint-background\lib_inpaint_background\webui_hijacks.py", line 26, in hijack_func
        return original_img2img_processing(id_task, mode, prompt, negative_prompt, prompt_styles, init_img, sketch,
      File "D:\stable-diffusion-webui\modules\img2img.py", line 231, in img2img
        processed = process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args, to_scale=selected_scale_tab == 1, scale_by=scale_by, use_png_info=img2img_batch_use_png_info, png_info_props=img2img_batch_png_info_props, png_info_dir=img2img_batch_png_info_dir)
      File "D:\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_i2ibatch.py", line 290, in hacked_img2img_process_batch_hijack
        return process_images(p)
      File "D:\stable-diffusion-webui\modules\processing.py", line 734, in process_images
        res = process_images_inner(p)
      File "D:\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 119, in hacked_processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\stable-diffusion-webui\modules\processing.py", line 804, in process_images_inner
        p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
      File "D:\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_i2ibatch.py", line 184, in hacked_i2i_init
        self.image_conditioning = self.img2img_image_conditioning(image * 2 - 1, self.init_latent, image_masks) # let's ignore this image_masks which is related to inpaint model with different arch
      File "D:\stable-diffusion-webui\modules\processing.py", line 360, in img2img_image_conditioning
        return self.inpainting_image_conditioning(source_image, latent_image, image_mask=image_mask)
      File "D:\stable-diffusion-webui\modules\processing.py", line 319, in inpainting_image_conditioning
        conditioning_mask = np.array(image_mask.convert("L"))
    AttributeError: 'list' object has no attribute 'convert'
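For context, the traceback boils down to a type mismatch: `animatediff_i2ibatch.py` passes `image_masks` (a per-frame list) into `img2img_image_conditioning`, but with an inpainting-architecture checkpoint loaded, `inpainting_image_conditioning` expects a single PIL image and calls `.convert("L")` on it. Below is a minimal sketch of that mismatch and a per-frame workaround; the function names are illustrative only, not the extension's or the WebUI's actual code.

```python
# Illustrative sketch only, not the extension's actual code. It reproduces the
# type mismatch behind "AttributeError: 'list' object has no attribute 'convert'":
# the img2img batch hijack collects one mask per frame into a Python list, while
# the inpainting conditioning path expects a single PIL image.
import numpy as np
from PIL import Image


def conditioning_mask_single(image_mask: Image.Image) -> np.ndarray:
    # Mirrors the failing call: works for one PIL image, fails for a list.
    return np.array(image_mask.convert("L")) / 255.0


def conditioning_masks_per_frame(image_masks) -> np.ndarray:
    # Hypothetical workaround: accept either a single mask or a per-frame list
    # and stack the converted masks along a new frame axis.
    if isinstance(image_masks, Image.Image):
        image_masks = [image_masks]
    return np.stack([np.array(m.convert("L")) / 255.0 for m in image_masks])


# Dummy grayscale masks standing in for the files read from
# "Inpaint batch mask directory".
frames = [Image.new("L", (64, 64), 255) for _ in range(4)]
print(conditioning_masks_per_frame(frames).shape)  # (4, 64, 64)
# conditioning_mask_single(frames) would raise the AttributeError from the log.
```

Any real fix presumably has to build the conditioning per frame inside the extension's hijack rather than in user code.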

---

Additional information

No response

continue-revolution commented 8 months ago

The new version should have fixed this.

zoucheng1991 commented 5 months ago

I'm on commit master | 67e6b96a, and this problem still exists.