continue-revolution / sd-webui-animatediff

AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI

[Bug]: #200

Closed · Apenys closed this issue 1 year ago

Apenys commented 1 year ago

Is there an existing issue for this?

Have you read the FAQ in the README?

What happened?

AnimateDiff can't be used: generation errors out whenever the extension is enabled.

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ... pressing Generate causes an error

What should have happened?

The image should generate normally.

Commit where the problem happens

webui: Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

None

Console logs

To create a public link, set `share=True` in `launch()`.
Startup time: 16.8s (prepare environment: 4.4s, import torch: 3.3s, import gradio: 1.2s, setup paths: 0.7s, initialize shared: 0.3s, other imports: 0.7s, setup codeformer: 0.1s, load scripts: 2.4s, create ui: 1.0s, gradio launch: 2.5s).
2023-10-11 17:48:30,857 - AnimateDiff - INFO - AnimateDiff process start.
2023-10-11 17:48:30,857 - AnimateDiff - INFO - You are using mm_sd_v15_v2.ckpt, which has been tested and supported.
2023-10-11 17:48:30,857 - AnimateDiff - INFO - Loading motion module mm_sd_v15_v2.ckpt from F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\extensions\sd-webui-animatediff\model\mm_sd_v15_v2.ckpt
2023-10-11 17:48:33,590 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2023-10-11 17:48:34,237 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet middle block.
2023-10-11 17:48:34,237 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet input blocks.
2023-10-11 17:48:34,237 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet output blocks.
2023-10-11 17:48:34,237 - AnimateDiff - INFO - Setting DDIM alpha.
2023-10-11 17:48:34,270 - AnimateDiff - INFO - Injection finished.
2023-10-11 17:48:34,270 - AnimateDiff - INFO - Hacking lora to support motion lora
2023-10-11 17:48:34,271 - AnimateDiff - INFO - Hacking CFGDenoiser forward function.
2023-10-11 17:48:34,271 - AnimateDiff - INFO - Hacking ControlNet.
*** Error completing request
*** Arguments: ('task(ai4mbajcoqi3x7m)', '1girl is running,', '', [], 20, 'Euler a', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x00000211857FD900>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'mediapipe_face_full', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 1536, 96, True, True, True, False, <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000211857FD240>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000021185469120>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000021185469CF0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002118546BA90>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002118373F520>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', 
[], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 108, in hacked_processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\modules\processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 252, in mm_cfg_forward
        x_out = mm_sd_forward(self, x_in, sigma_in, cond_in, image_cond_in, make_condition_dict) # hook
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 167, in mm_sd_forward
        out = self.inner_model(x_in[_context], sigma_in[_context], cond=make_condition_dict(cond_in[_context], image_cond_in[_context]))
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
        result = forward_call(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 86, in mm_tes_forward
        x = layer(x, context)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\extensions\sd-webui-animatediff\motion_module.py", line 86, in forward
        return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\extensions\sd-webui-animatediff\motion_module.py", line 150, in forward
        hidden_states = block(hidden_states, encoder_hidden_states=encoder_hidden_states, video_length=video_length)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\extensions\sd-webui-animatediff\motion_module.py", line 212, in forward
        hidden_states = attention_block(
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\extensions\sd-webui-animatediff\motion_module.py", line 567, in forward
        hidden_states = self._memory_efficient_attention(query, key, value, attention_mask, optimizer_name)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\extensions\sd-webui-animatediff\motion_module.py", line 467, in _memory_efficient_attention
        hidden_states = xformers.ops.memory_efficient_attention(
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 193, in memory_efficient_attention
        return _memory_efficient_attention(
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 291, in _memory_efficient_attention
        return _memory_efficient_attention_forward(
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 311, in _memory_efficient_attention_forward
        out, *_ = op.apply(inp, needs_gradient=False)
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\xformers\ops\fmha\cutlass.py", line 186, in apply
        out, lse, rng_seed, rng_offset = cls.OPERATOR(
      File "F:\sd-webui-aijinpai-v4.2\sd-webui-aijinpai-v4.2\python\lib\site-packages\torch\_ops.py", line 502, in __call__
        return self._op(*args, **kwargs or {})
    RuntimeError: CUDA error: invalid configuration argument
    Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Note: Python threw an exception while running. Please check the troubleshooting page.

---

Additional information

Generation works normally when this extension is disabled.
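
For triage, note where the traceback lands: the crash is not in webui's own sampling code but inside xformers' CUTLASS kernel, reached through the motion module's temporal attention (`motion_module.py` → `_memory_efficient_attention` → `xformers.ops.memory_efficient_attention`). `CUDA error: invalid configuration argument` is the driver rejecting a kernel launch, often because the launch grid computed from the input shapes exceeds a device limit. The sketch below is a minimal illustration of that call pattern with a fallback to PyTorch's built-in scaled-dot-product attention; `attention_with_fallback` is a hypothetical helper, not the extension's actual code, and the tensor layout follows xformers' `(batch, seq_len, num_heads, head_dim)` convention.

```python
import torch
import torch.nn.functional as F

try:
    import xformers.ops
    HAS_XFORMERS = True
except ImportError:
    HAS_XFORMERS = False

def attention_with_fallback(query: torch.Tensor,
                            key: torch.Tensor,
                            value: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: try xformers' memory-efficient attention,
    falling back to torch's SDP kernel if the CUDA launch is rejected.
    Inputs use xformers' layout: (batch, seq_len, num_heads, head_dim)."""
    if HAS_XFORMERS:
        try:
            # The call that raises "CUDA error: invalid configuration
            # argument" in the log above (fmha CUTLASS forward).
            return xformers.ops.memory_efficient_attention(query, key, value)
        except RuntimeError:
            # Recovering after a CUDA error is not always safe; sketch only.
            pass
    # torch expects (batch, num_heads, seq_len, head_dim), so transpose
    # into that layout and back.
    q, k, v = (t.transpose(1, 2) for t in (query, key, value))
    out = F.scaled_dot_product_attention(q, k, v)  # PyTorch >= 2.0
    return out.transpose(1, 2)
```

With PyTorch >= 2.0, `scaled_dot_product_attention` selects a working fused kernel on its own, which sidesteps the CUTLASS launch that fails here.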

continue-revolution commented 1 year ago

#172
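
(Presumably a pointer to issue #172, an earlier report of the same `invalid configuration argument` crash inside xformers' memory-efficient attention.) A workaround commonly suggested for this class of failure, though not confirmed in this thread, is to keep attention off the xformers path entirely: remove `--xformers` from `COMMANDLINE_ARGS` and let webui use PyTorch's SDP attention instead, for example:

```
REM webui-user.bat -- example only: swap xformers for torch SDP attention
set COMMANDLINE_ARGS=--opt-sdp-attention
```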