continue-revolution / sd-webui-animatediff

AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI

[Bug]: AnimateDiff doesn't work with IP Adapter Plus V2 #508

Open yengalvez opened 5 months ago

yengalvez commented 5 months ago

Is there an existing issue for this?

Have you read the FAQ in the README?

What happened?

I have disabled all extensions except the latest ControlNet and AnimateDiff. The Face Plus V2 IP adapter works correctly on its own, but the moment I activate AnimateDiff alongside it I get this error: AttributeError: 'FaceIdPlusInput' object has no attribute 'shape'

Looking at the complete log, the problem appears to be caused by a bug in the AnimateDiff extension.
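
For reference, the failing check is in extensions/sd-webui-animatediff/scripts/animatediff_infv2v.py (mm_cn_select), which assumes every ControlNet unit's hint_cond is an image tensor. With the IP-Adapter FaceID Plus V2 preprocessor, ControlNet stores a FaceIdPlusInput wrapper there instead of a tensor, and that wrapper has no .shape attribute. Below is a minimal sketch of the kind of type guard that would avoid the crash; it is hypothetical, the helper name is illustrative, and it is not the extension's actual implementation:

    import torch

    def select_hint_for_context(control, context):
        # Only slice per-frame hints when hint_cond is actually an image tensor.
        # FaceID / IP-Adapter units can carry a wrapper object (here a
        # FaceIdPlusInput) that has no .shape, which is what raises the
        # AttributeError shown in the console log below.
        hint = control.hint_cond
        if isinstance(hint, torch.Tensor) and hint.shape[0] > len(context):
            control.hint_cond = hint[context]
        # Non-tensor hints (e.g. face embeddings) are left untouched in this sketch.

Whether the proper fix belongs in AnimateDiff or on the ControlNet side is for the maintainers to decide; the sketch only localizes the type mismatch.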

Steps to reproduce the problem

  1. Activate the ControlNet IP-Adapter Face Plus V2 unit with only 1 reference image
  2. Activate AnimateDiff with default settings
  3. Generate

What should have happened?

A video should be generated with the reference face applied.

Commit where the problem happens

extension: 85a854b4

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--medvram --xformers

Console logs

0%|                                                                                           | 0/20 [00:00<?, ?it/s]2024-04-19 22:45:42,204 - AnimateDiff - INFO - inner model forward hooked
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(gfztck8t1f9hvgv)', <gradio.routes.Request object at 0x000001BD980E1C60>, 'toon, cartoon, image of beautiful man, space, light, particles, god, space light, galaxy dust   <lora:ip-adapter-faceid-plusv2_sd15_lora:1>', 'ugly, bad draw, deformed, bad eyes', [], 1, 1, 1.5, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 3M SDE', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x000001BD980E3340>, UiControlNetUnit(enabled=True, module='ip-adapter_face_id_plus', model='ip-adapter-faceid-plusv2_sd15 [6e14fc1a]', weight=1, image={'image': array([...], dtype=uint8), 'mask': array([...], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='ControlNet is more important', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None, ipadapter_input=None), UiControlNetUnit(enabled=False, module='ip-adapter-auto', model='ip-adapter-faceid-plusv2_sd15 [6e14fc1a]', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None, ipadapter_input=None), UiControlNetUnit(enabled=False, module='ip-adapter-auto', model='ip-adapter-faceid-plusv2_sd15 [6e14fc1a]', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None, ipadapter_input=None), True, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "A:\Program Files\A1111\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "A:\Program Files\A1111\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "A:\Program Files\A1111\stable-diffusion-webui\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "A:\Program Files\A1111\stable-diffusion-webui\modules\processing.py", line 845, in process_images
        res = process_images_inner(p)
      File "A:\Program Files\A1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 48, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "A:\Program Files\A1111\stable-diffusion-webui\modules\processing.py", line 981, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "A:\Program Files\A1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 449, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "A:\Program Files\A1111\stable-diffusion-webui\modules\processing.py", line 1328, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "A:\Program Files\A1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 218, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "A:\Program Files\A1111\stable-diffusion-webui\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "A:\Program Files\A1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 218, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "A:\Program Files\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "A:\Program Files\A1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 668, in sample_dpmpp_3m_sde
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "A:\Program Files\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "A:\Program Files\A1111\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "A:\Program Files\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "A:\Program Files\A1111\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 162, in mm_sd_forward
        mm_cn_select(_context)
      File "A:\Program Files\A1111\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 116, in mm_cn_select
        if control.hint_cond.shape[0] > len(context):
    AttributeError: 'FaceIdPlusInput' object has no attribute 'shape'

Additional information

No response