continue-revolution / sd-webui-animatediff

AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI

[Bug]: AnimateDiff does not work #494

Closed: turugi-ni closed this issue 8 months ago

turugi-ni commented 8 months ago

Is there an existing issue for this?

Have you read the FAQ on the README?

What happened?

AnimateDiff does not work. Even after writing the prompt, setting AnimateDiff to Enable, configuring the necessary options, and clicking the Generate button, no animation is created and generation aborts. The command prompt output at that time follows:

```
2024-04-02 23:56:46,033 - AnimateDiff - INFO - AnimateDiff process start.
2024-04-02 23:56:46,034 - AnimateDiff - INFO - Loading motion module v3_sd15_mm.ckpt from D:\Touls\stable-diffusion-webui\extensions\sd-webui-animatediff\model\v3_sd15_mm.ckpt
2024-04-02 23:56:46,629 - AnimateDiff - INFO - Guessed v3_sd15_mm.ckpt architecture: MotionModuleType.AnimateDiffV3
2024-04-02 23:56:48,718 - AnimateDiff - INFO - Injecting motion module v3_sd15_mm.ckpt into SD1.5 UNet input blocks.
2024-04-02 23:56:48,718 - AnimateDiff - INFO - Injecting motion module v3_sd15_mm.ckpt into SD1.5 UNet output blocks.
2024-04-02 23:56:48,718 - AnimateDiff - INFO - Setting DDIM alpha.
2024-04-02 23:56:48,742 - AnimateDiff - INFO - Injection finished.
2024-04-02 23:56:48,743 - AnimateDiff - INFO - AnimateDiff + ControlNet will generate 160 frames.
loading network D:\Touls\stable-diffusion-webui\models\Lora\TOOL\v3_sd15_adapter.ckpt: AssertionError
0it [00:00, ?it/s]
Traceback (most recent call last):
  File "D:\Touls\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 280, in load_networks
    net = load_network(name, network_on_disk)
  File "D:\Touls\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 224, in load_network
    raise AssertionError(f"Could not find a module type (out of {', '.join([x.__class__.__name__ for x in module_types])}) that would accept those keys: {', '.join(weights.w)}")
AssertionError: Could not find a module type (out of ModuleTypeLora, ModuleTypeHada, ModuleTypeIa3, ModuleTypeLokr, ModuleTypeFull, ModuleTypeNorm, ModuleTypeGLora, ModuleTypeOFT) that would accept those keys: 0.attentions.0.transformer_blocks.0.attn1.processor.to_q_lora.down.weight, 0.attentions.0.transformer_blocks.0.attn1.processor.to_q_lora.up.weight, 0.attentions.0.transformer_blocks.0.attn1.processor.to_k_lora.down.weight, 0.attentions.0.transformer_blocks.0.attn1.processor.to_k_lora.up.weight, [... the list continues with the .down/.up weights of to_q_lora, to_k_lora, to_v_lora, and to_out_lora for attn1 and attn2 of every block from 0.attentions.0 through 2.attentions.1 ...]
```

The identical `AssertionError` traceback for `v3_sd15_adapter.ckpt` is printed two more times.

```
0%| | 0/30 [00:00<?, ?it/s]
2024-04-02 23:56:49,968 - AnimateDiff - INFO - inner model forward hooked
C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\cuda\IndexKernel.cu:92: block: [794,0,0], thread: [0,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
[... the same assertion is repeated for threads [1,0,0] through [31,0,0] ...]
0%| | 0/30 [00:00<?, ?it/s]
Exception in thread MemMon:
Traceback (most recent call last):
  File "D:\Touls\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "D:\Touls\stable-diffusion-webui\modules\memmon.py", line 53, in run
    free, total = self.cuda_mem_get_info()
  File "D:\Touls\stable-diffusion-webui\modules\memmon.py", line 34, in cuda_mem_get_info
    return torch.cuda.mem_get_info(index)
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 663, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
```
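The log itself suggests `CUDA_LAUNCH_BLOCKING=1` so the device-side assert is reported at the kernel launch that actually triggered it, rather than at a later API call. A minimal sketch of how that variable is used from Python; wiring it into the WebUI launcher (e.g. its startup environment) is left as an assumption:

```python
# The variable must be set before CUDA is initialized, so set it
# before the first torch import.
import os

os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # synchronous kernel launches

import torch  # imported afterwards so the setting takes effect

if torch.cuda.is_available():
    x = torch.zeros(4, device="cuda")
    print(x + 1)  # a failing kernel would now report its assert here, not later
```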

```
Error completing request
Arguments: ('task(r30auw85ydpp0am)', <gradio.routes.Request object at 0x00000238B185DDB0>, [... full txt2img argument dump omitted for readability: positive and negative prompts, 30 steps, 'Euler a', CFG 7, 768x512 with hires fix ('R-ESRGAN 4x+ Anime6B', denoising 0.5), two ADetailer unit configs, Tiled Diffusion / DemoFusion settings, <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000238B185DCC0>, three disabled UiControlNetUnit entries, and further script defaults ...]) {}
Traceback (most recent call last):
  File "D:\Touls\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\Touls\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "D:\Touls\stable-diffusion-webui\modules\txt2img.py", line 110, in txt2img
    processed = processing.process_images(p)
  File "D:\Touls\stable-diffusion-webui\modules\processing.py", line 785, in process_images
    res = process_images_inner(p)
  File "D:\Touls\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 48, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\Touls\stable-diffusion-webui\modules\processing.py", line 921, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\Touls\stable-diffusion-webui\modules\processing.py", line 1257, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\Touls\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\Touls\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "D:\Touls\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Touls\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Touls\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 256, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Touls\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 164, in mm_sd_forward
    x_in[_context], sigma_in[_context],
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
```


```
Traceback (most recent call last):
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\Touls\stable-diffusion-webui\modules\call_queue.py", line 77, in f
    devices.torch_gc()
  File "D:\Touls\stable-diffusion-webui\modules\devices.py", line 81, in torch_gc
    torch.cuda.empty_cache()
  File "D:\Touls\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 159, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
```

I don't know how to fix this, as I have no knowledge of Python or any other programming.
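For context, the rejected key names in the first traceback (`...processor.to_q_lora.down.weight` and so on) are in the Diffusers LoRA layout, which the WebUI's built-in Lora extension does not parse. A quick sketch for inspecting what a checkpoint actually contains, assuming the file path from the log:

```python
# List the first few parameter names stored in the adapter checkpoint,
# to compare against the key layouts the Lora extension understands.
import torch

ckpt = torch.load(
    r"D:\Touls\stable-diffusion-webui\models\Lora\TOOL\v3_sd15_adapter.ckpt",
    map_location="cpu",
)
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints nest the weights
for name in list(state_dict)[:8]:
    print(name)
```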

Steps to reproduce the problem

1. Write a prompt in txt2img.
2. Set AnimateDiff to Enable and configure the necessary options.
3. Click the Generate button.

Generation fails with the errors above and no animation is produced.

What should have happened?

If it works correctly, a 20-second animation should be generated.
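For reference, 20 seconds is consistent with the log line "AnimateDiff + ControlNet will generate 160 frames" if the output runs at 8 FPS (8 is assumed here; the configured FPS is not shown in the log):

```python
frames = 160  # from the AnimateDiff log line above
fps = 8       # assumed AnimateDiff FPS setting
print(frames / fps, "seconds")  # 20.0 seconds
```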

Commit where the problem happens

webui: 1.8.0; extension: sd-webui-animatediff v2.0.0-a

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

None

Console logs

```
Running Stable-Diffusion-Webui-Civitai-Helper on Gradio Version: 3.23.0
index-8b1603a4.js:183 Could not find "window.__TAURI_METADATA__". The "appWindow" value will reference the "main" window label.
Note that this is not an issue if running this frontend on a browser instead of a Tauri window.
(anonymous) @ index-8b1603a4.js:183
index-8b1603a4.js:189 Object
index-8b1603a4.js:208 undefined
index-8b1603a4.js:193 Splitpanes: Could not resize panes correctly due to their constraints.
(anonymous) @ index-8b1603a4.js:193
index.js?1710991491.0858984:108 iib-message: Object
index.js?1710991491.0858984:108 iib-message: Object
Blocks.svelte:258 Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'props')
    at Blocks.svelte:258:11
    at Array.forEach (<anonymous>)
    at qe (Blocks.svelte:256:9)
    at Blocks.svelte:376:7
```

Additional information

No response

rookiemann commented 8 months ago

Try making your prompt smaller. When I keep both my positive and negative prompts under 75 words, I don't get this error. When I go over 75, I get it every time. A sketch for checking a prompt's length is shown below.
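A minimal sketch for that check, assuming the WebUI's usual behavior of splitting prompts into 75-token CLIP chunks (the limit is tokens rather than words):

```python
# Count CLIP tokens for a prompt to see how many 75-token chunks the WebUI
# will build. Requires `pip install transformers`; the tokenizer below is
# the one used by SD 1.5 text encoders.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "masterpiece, best quality, 1girl, dancing, nightclub"  # example
n_tokens = len(tokenizer(prompt)["input_ids"]) - 2  # minus BOS/EOS tokens
n_chunks = -(-n_tokens // 75)  # ceiling division
print(f"{n_tokens} tokens -> {n_chunks} chunk(s) of 75")
```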

xxxsnacks commented 8 months ago

I'm having this same problem.

continue-revolution commented 8 months ago

@rookiemann Read the pinned issue; the prompt-length problem will be resolved once you have done what that issue describes.

@xxxsnacks @turugi-ni If you have a LoRA problem, you are supposed to use the adapter from my HF repo. The original one won't work.
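A minimal sketch of fetching a WebUI-compatible adapter with `huggingface_hub`; the repo id and filename below are assumptions, not taken from this thread, so check the maintainer's Hugging Face page for the actual names:

```python
# Download the A1111-compatible v3 adapter LoRA into the WebUI's Lora folder.
# repo_id and filename are assumed, not verified; adjust to the real repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="conrevo/AnimateDiff-A1111",             # assumed maintainer repo
    filename="lora/mm_sd15_v3_adapter.safetensors",  # assumed file name
    local_dir=r"D:\Touls\stable-diffusion-webui\models\Lora",
)
print("saved to", path)
```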

lxyh2 commented 5 months ago

> @rookiemann Read the pinned issue; the prompt-length problem will be resolved once you have done what that issue describes.
>
> @xxxsnacks @turugi-ni If you have a LoRA problem, you are supposed to use the adapter from my HF repo. The original one won't work.

I have the same problem. How can I solve it?