continue-revolution / sd-webui-animatediff

AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI

[Bug]: AnimateDiff cannot be used in a cloud deployment: Runtime Error #201

Closed 2575044704 closed 1 year ago

2575044704 commented 1 year ago

Is there an existing issue for this?

Have you read the FAQ in the README?

What happened?

I hit this error while running on Kaggle with 2× T4 GPUs (15 GB VRAM each); I have tentatively ruled out running out of VRAM as the cause. Could it be that the T4 GPU is simply not supported?

RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
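The error message itself suggests re-running with CUDA_LAUNCH_BLOCKING=1 so the stack trace points at the kernel that actually fails. A minimal debugging sketch for a notebook cell (my own suggestion, not part of the original notebook; it assumes it runs in the same process before webui starts):

import os
import torch

# Assumption: run this before anything touches CUDA, so kernel launch errors
# are reported synchronously and the traceback points at the real failing call.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# Sanity-check which GPUs Kaggle assigned; T4s report compute capability (7, 5).
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i), torch.cuda.get_device_capability(i))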

Steps to reproduce the problem

  1. Go to the Kaggle Stable Diffusion notebook
  2. Press "Run All"
  3. Open the Stable Diffusion link
  4. Enable AnimateDiff
  5. The error occurs

What should have happened?

A GIF should have been generated; instead, generation failed.

Commit where the problem happens

webui: 1.6.0, extension: latest

What browsers do you use to access the UI?

No response

Command Line Arguments

# Launch arguments (args)
args = [
    #'--share', # enable public access; without it there is no gradio link
    '--xformers', # force the xformers optimization
    '--lowram', # low-RAM optimization
    '--no-hashing', # skip model hash computation to speed up startup
    '--disable-nan-check', # disable the NaN check
    '--enable-insecure-extension-access', # allow installing extensions from the webui even when --share is enabled
    '--disable-console-progressbars',
    '--enable-console-prompts', # print prompts to the console
    '--no-gradio-queue',
    '--no-half-vae', # run the VAE at full precision
    '--api', # required when hooking SD up to a QQ drawing bot or an AI drawing website
    #'--listen',  # no use on Kaggle; would change 127.0.0.1:7860 to 0.0.0.0:7860
    f'--lyco-dir {install_path}/stable-diffusion-webui/models/lyco',
    '--opt-sdp-no-mem-attention', # faster generation: scaled dot-product (SDP) attention without the memory-efficient variant (Torch 2.x only); a cross-attention optimization that must not be combined with --opt-sdp-attention
    '--opt-split-attention', # cross-attention layer memory optimization
]
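For context, one way a notebook might hand this list to webui is sketched below; it continues from the args list above, the actual launch cell in the notebook may differ, and install_path is assumed to be defined earlier. Joining and shell-splitting the flags also keeps entries like the single-string "--lyco-dir <path>" working as two argv tokens.

import shlex
import subprocess

install_path = "/kaggle/working"  # assumed value; defined earlier in the real notebook

# Join the flag list into one command line, then split it shell-style so that
# "--lyco-dir <path>" becomes two separate tokens before reaching launch.py.
cmd = ["python", "launch.py", *shlex.split(" ".join(args))]
subprocess.run(cmd, cwd=f"{install_path}/stable-diffusion-webui", check=True)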

Console logs

Traceback (most recent call last):
  File "/kaggle/working/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/kaggle/working/stable-diffusion-webui/modules/call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/modules/txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "/kaggle/working/stable-diffusion-webui/modules/processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "/kaggle/working/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff_cn.py", line 108, in hacked_processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/modules/processing.py", line 867, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/kaggle/working/stable-diffusion-webui/modules/processing.py", line 1140, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/kaggle/working/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 235, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/kaggle/working/stable-diffusion-webui/modules/sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "/kaggle/working/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 235, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff_infv2v.py", line 249, in mm_cfg_forward
    x_out = mm_sd_forward(self, x_in, sigma_in, cond_in, image_cond_in, make_condition_dict) # hook
  File "/kaggle/working/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff_infv2v.py", line 164, in mm_sd_forward
    out = self.inner_model(x_in[_context], sigma_in[_context], cond=make_condition_dict(cond_in[_context], image_cond_in[_context]))
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/kaggle/working/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/kaggle/working/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/modules/sd_unet.py", line 91, in UNetModel_forward
    return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff_mm.py", line 86, in mm_tes_forward
    x = layer(x, context)
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 86, in forward
    return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 150, in forward
    hidden_states = block(hidden_states, encoder_hidden_states=encoder_hidden_states, video_length=video_length)
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 212, in forward
    hidden_states = attention_block(
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 567, in forward
    hidden_states = self._memory_efficient_attention(query, key, value, attention_mask, optimizer_name)
  File "/kaggle/working/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 467, in _memory_efficient_attention
    hidden_states = xformers.ops.memory_efficient_attention(
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 193, in memory_efficient_attention
    return _memory_efficient_attention(
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 291, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 311, in _memory_efficient_attention_forward
    out, *_ = op.apply(inp, needs_gradient=False)
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/xformers/ops/fmha/cutlass.py", line 186, in apply
    out, lse, rng_seed, rng_offset = cls.OPERATOR(
  File "/kaggle/working/opt/conda/envs/venv/lib/python3.10/site-packages/torch/_ops.py", line 502, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
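The final frames show the AnimateDiff motion module routing its temporal attention through xformers.ops.memory_efficient_attention and failing inside the CUTLASS operator. A small isolation sketch (my own, with purely illustrative tensor shapes; as I understand it, AnimateDiff's temporal attention folds the spatial positions into the batch axis, so the real batch can be much larger) to check whether the op fails outside webui on a T4:

import torch
import xformers.ops as xops

# Illustrative shapes only: (batch, sequence length, heads, head dim).
# The sequence axis would carry the video frames; the batch axis would carry
# every spatial position of the latent, so it grows with resolution.
q = torch.randn(4096, 16, 8, 40, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = xops.memory_efficient_attention(q, k, v)
print(out.shape)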

Additional information

My Stable Diffusion notebook is public at https://www.kaggle.com/code/l114514llove/kaggle?scriptVersionId=146119786; I hope the author can run it and check what the problem is. I really need this. (Screenshot attached: QQ截图20231011194319)

mbastias commented 1 year ago

This is #174 if I'm not mistaken; I'm having the same issue. Apparently it's because of xformers.
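If xformers really is the culprit, one quick test (my own suggestion, not confirmed in the thread) is to relaunch without the --xformers flag so the motion module cannot take the xformers attention path seen in the traceback; a minimal sketch reusing the launch pattern assumed earlier:

import shlex
import subprocess

# Flags from the report with '--xformers' removed (the flag implicated above);
# '--opt-sdp-no-mem-attention' is kept, so cross-attention should fall back to
# PyTorch's scaled-dot-product kernels instead of xformers.
args = [
    '--lowram', '--no-hashing', '--disable-nan-check',
    '--enable-insecure-extension-access', '--no-half-vae', '--api',
    '--opt-sdp-no-mem-attention', '--opt-split-attention',
]
subprocess.run(
    ["python", "launch.py", *shlex.split(" ".join(args))],
    cwd="/kaggle/working/stable-diffusion-webui",  # install path taken from the traceback
    check=True,
)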

continue-revolution commented 1 year ago

Search before you ask.