continue-revolution / sd-webui-animatediff

AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI

[Bug]: SDXL error #485

Closed. zopi4k closed this issue 6 months ago.

zopi4k commented 6 months ago

Is there an existing issue for this?

Have you read FAQ on README?

What happened?

SD1.5 checkpoints work well, but when I use an SDXL checkpoint I get an error and only one image is generated.

Steps to reproduce the problem

  1. I use "sdXL_v10VAEFix.safetensors" / SD VAE: "Automatic"

  2. 1024x1024 / Euler a / 20 steps / CFG 7

  3. "cat" in prompt

  4. Enable AnimateDiff with "hsxl_temporal_layers.f16.safetensors" or "mm_sdxl_v10_beta.ckpt"

  5. Number of frames "32" / Context batch size "16"...

  6. Run Generate

  7. The console prints the traceback and warning shown in the Console logs section below.

What should have happened?

A cat animation should be generated, but I only get a single picture out.

Commit where the problem happens

webui: 1.8.0 / extension: latest (2.0.0). AnimateDiff produces only a single image.

What browsers do you use to access the UI?

Chrome

Command Line Arguments

--xformers --xformers-flash-attention --opt-sdp-attention  --lora-dir 'c:\SD\Models\Lora'

I have also set "Path to save AnimateDiff motion modules" to c:\SD\Models\animatediff\.

Console logs

Chrome F12: nothing in the console.

Web UI console:

2024-03-25 10:05:53,361 - AnimateDiff - INFO - AnimateDiff process start.
2024-03-25 10:05:53,361 - AnimateDiff - INFO - Loading motion module hsxl_temporal_layers.f16.safetensors from c:\SD\Models\animatediff\hsxl_temporal_layers.f16.safetensors
** Error running before_process: C:\SD\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
Traceback (most recent call last):
  File "C:\SD\stable-diffusion-webui\modules\scripts.py", line 776, in before_process
    script.before_process(p, script_args)
  File "C:\SD\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 64, in before_process
    motion_module.inject(p.sd_model, params.model)
  File "C:\SD\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 68, in inject
    self.load(model_name)
  File "C:\SD\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 48, in load
    model_type = MotionModuleType.get_mm_type(mm_state_dict)
  File "C:\SD\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 33, in get_mm_type
    if 32 in next((state_dict[key] for key in state_dict if 'pe' in key), None).shape:
AttributeError: 'NoneType' object has no attribute 'shape'


0%| | 0/20 [00:00<?, ?it/s]
2024-03-25 10:05:53,828 - AnimateDiff - WARNING - No motion module detected, falling back to the original forward. You are most likely using !Adetailer. !Adetailer post-process your outputs sequentially, and there will NOT be motion module in your UNet, so there might be NO temporal consistency within the inpainted face. Use at your own risk. If you really want to pursue inpainting with AnimateDiff inserted into UNet, use Segment Anything to generate masks for each frame and inpaint them with AnimateDiff + ControlNet. Note that my proposal might be good or bad, do your own research to figure out the best way.
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:07<00:00, 2.60it/s]
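
For context, the AttributeError comes from the last frame of the traceback above: get_mm_type looks for a state-dict key containing 'pe' (the positional-encoding tensor), and next() falls back to None when no such key exists, so the subsequent .shape access fails. A minimal sketch of that failure pattern in Python, using hypothetical state-dict keys rather than the extension's real ones:

    # Sketch of the failing pattern from motion_module.py line 33, not the extension's code.
    # The keys below are hypothetical; the point is that a state dict without any
    # key containing 'pe' makes next() return its None default.
    state_dict = {"down_blocks.0.motion.weight": [0.0], "up_blocks.1.motion.weight": [0.0]}

    pe = next((state_dict[key] for key in state_dict if 'pe' in key), None)
    print(pe)  # None -> pe.shape raises AttributeError, exactly as in the log

    # A guarded check would surface a clearer error instead of the AttributeError:
    if pe is None:
        raise ValueError("No positional-encoding ('pe') key found; "
                         "this motion module file is not in a format the extension recognises.")

In other words, the hsxl_temporal_layers.f16.safetensors build loaded here apparently does not expose the key that extension version 2.0.0 expects, which is what the reply below addresses.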

continue-revolution commented 6 months ago

use https://huggingface.co/conrevo/AnimateDiff-A1111/resolve/main/motion_module/mm_sdxl_hs.safetensors?download=true
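
A hedged sketch of fetching that file into the motion-module folder used in this issue (the destination below is the reporter's configured path, not a required location):

    # Sketch: download the suggested SDXL motion module to the folder configured
    # under "Path to save AnimateDiff motion modules". Adjust dest to your setup.
    import urllib.request

    url = ("https://huggingface.co/conrevo/AnimateDiff-A1111/resolve/main/"
           "motion_module/mm_sdxl_hs.safetensors?download=true")
    dest = r"c:\SD\Models\animatediff\mm_sdxl_hs.safetensors"

    urllib.request.urlretrieve(url, dest)
    print("Saved to", dest)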

continue-revolution commented 6 months ago

This is a breaking change made for maintenance reasons. It is documented in the README.