guoyww / AnimateDiff

Official implementation of AnimateDiff.
https://animatediff.github.io/

Wall of error messages: "Error running before_process", "Missing key(s) in state_dict", "AttributeError: 'CFGDenoiserParams' object has no attribute 'denoiser'", and even "Shape mismatch, can't divide axis of length 2 in chunks of 16" #336

Open

Pup-In-Cup commented 2 months ago

Where do I begin...? All I did was install AnimateDiff and download this model from CivitAI: https://civitai.com/models/326698/animatediff-lcm-motion-model

The walls of errors started when I tried to do Text-to-Image (Text-to-Video?). It's pages and pages' worth of a single error message! On top of that, I don't get the same error or behavior every time: sometimes A1111 WebUI outputs a single image plus a wall of errors, other times it outputs just the wall of errors without generating an image at all. I thought I could pin down when the interface outputs an image and when it outputs only errors, but I can't reproduce it. It's that unpredictable.

Right then, behaviors:

  1. Outputs an image, but no video or gif file. The terminal is full of errors. Or,
  2. Doesn't output an image, and the terminal is full of errors.

How to reproduce: I have no clue! I just installed the thing!

Attempts at fixing:

  1. Updated the extensions. Didn't work.
  2. Disabled ControlNet extension. No luck.
  3. Updated A1111 WebUI. No dice.
  4. Updated pip. I guess it wasn't related.
  5. Updated the CUDA driver from 10.22 to 12-point-something. It was the newest one on the NVIDIA page. I guess that was still not related.
  6. Tried to install Torch and Xformers. It was a shot in the dark.
  7. Tried to add "--reinstall-torch --reinstall-xformers" to the batch file. (It's what made me try step 6.) No luck, either. (There's a quick script after this list to check what those steps actually left installed.)
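
In case it helps anyone retrace steps 5-7, this is just my own rough sketch of how to check which torch/CUDA/xformers builds the webui's venv actually ends up with (run it with the venv's own Python; the path is from my install):

    # check_env.py -- run with the webui venv's Python, e.g.:
    #   C:\Diffusion\stable-diffusion-webui\venv\Scripts\python.exe check_env.py
    import torch

    print("torch:", torch.__version__)          # e.g. 2.0.1+cu118
    print("CUDA build:", torch.version.cuda)    # the CUDA version torch was compiled against
    print("CUDA available:", torch.cuda.is_available())

    try:
        import xformers
        print("xformers:", xformers.__version__)
    except ImportError:
        print("xformers: not installed")        # would match the "xformers: N/A" in the UI footer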

This is the wall of errors I get when I try to generate a video but still get a single image out of it:

2024-04-25 07:54:23,570 - AnimateDiff - INFO - Loading motion module animatediffLCMMotion_v10.ckpt from C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\model\animatediffLCMMotion_v10.ckpt
2024-04-25 07:54:25,005 - AnimateDiff - INFO - Guessed animatediffLCMMotion_v10.ckpt architecture: MotionModuleType.AnimateDiffV2
*** Error running before_process: C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\Diffusion\stable-diffusion-webui\modules\scripts.py", line 611, in before_process
        script.before_process(p, *script_args)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 64, in before_process
        motion_module.inject(p.sd_model, params.model)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 68, in inject
        self.load(model_name)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 52, in load
        self.mm.load_state_dict(mm_state_dict)
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for MotionWrapper:
        Missing key(s) in state_dict: "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.1.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.1.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", 
"up_blocks.2.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.2.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.3.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.3.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "mid_block.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "mid_block.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe".

---
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]*** Error executing callback cfg_denoiser_callback for C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\Diffusion\stable-diffusion-webui\modules\script_callbacks.py", line 216, in cfg_denoiser_callback
        c.callback(params)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 88, in animatediff_on_cfg_denoiser
        if cfg_params.denoiser.step == 0:
    AttributeError: 'CFGDenoiserParams' object has no attribute 'denoiser'

---
  5%|████▏                                                                              | 1/20 [00:01<00:21,  1.11s/it]*** Error executing callback cfg_denoiser_callback for C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\Diffusion\stable-diffusion-webui\modules\script_callbacks.py", line 216, in cfg_denoiser_callback
        c.callback(params)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 88, in animatediff_on_cfg_denoiser
        if cfg_params.denoiser.step == 0:
    AttributeError: 'CFGDenoiserParams' object has no attribute 'denoiser'

The wall seems to break down into three sections:

  1. I have no clue what type of error this one is.
  2. Missing key(s) in state_dict
  3. AttributeError: 'CFGDenoiserParams' object has no attribute 'denoiser'
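
Sections 2 and 3 both look like a mismatch between what the extension expects and what it actually finds: the motion checkpoint is missing the pos_encoder keys the guessed architecture wants, and my webui's CFGDenoiserParams apparently never gets a denoiser attribute. Here's a rough diagnostic sketch for both halves (paths are from my install; the "self.denoiser" text search is only a crude stand-in for actually importing the class):

    # Assumes the motion module is a plain PyTorch checkpoint.
    import torch
    from pathlib import Path

    WEBUI = Path(r"C:\Diffusion\stable-diffusion-webui")

    # 1) Which keys does the downloaded motion module really contain?
    ckpt = torch.load(WEBUI / "extensions/sd-webui-animatediff/model/animatediffLCMMotion_v10.ckpt",
                      map_location="cpu")
    sd = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    pe_keys = [k for k in sd if "pos_encoder.pe" in k]
    print(f"{len(sd)} keys total, {len(pe_keys)} pos_encoder.pe keys")

    # 2) Does this webui build's CFGDenoiserParams ever set a 'denoiser' attribute?
    src = (WEBUI / "modules/script_callbacks.py").read_text(encoding="utf-8")
    print("CFGDenoiserParams sets self.denoiser:", "self.denoiser" in src)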

The wall of text I get when there's no image generated:

*** Error running before_process: C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\Diffusion\stable-diffusion-webui\modules\scripts.py", line 611, in before_process
        script.before_process(p, *script_args)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 64, in before_process
        motion_module.inject(p.sd_model, params.model)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 112, in inject
        self._set_ddim_alpha(sd_model)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 178, in _set_ddim_alpha
        self.prev_alpha_cumprod_original = sd_model.alphas_cumprod_original
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'LatentDiffusion' object has no attribute 'alphas_cumprod_original'

---
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]*** Error executing callback cfg_denoiser_callback for C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\Diffusion\stable-diffusion-webui\modules\script_callbacks.py", line 216, in cfg_denoiser_callback
        c.callback(params)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 88, in animatediff_on_cfg_denoiser
        if cfg_params.denoiser.step == 0:
    AttributeError: 'CFGDenoiserParams' object has no attribute 'denoiser'

---
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(bljdy8n9d4tj1tf)', 'woman', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000021DE9A35AB0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000021DE9A36800>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 410, in reduce
        return _apply_recipe(recipe, tensor, reduction_type=reduction)
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 233, in _apply_recipe
        _reconstruct_from_shape(recipe, backend.shape(tensor))
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 198, in _reconstruct_from_shape_uncached
        raise EinopsError("Shape mismatch, can't divide axis of length {} in chunks of {}".format(
    einops.EinopsError: Shape mismatch, can't divide axis of length 2 in chunks of 16

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Diffusion\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Diffusion\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "C:\Diffusion\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "C:\Diffusion\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Diffusion\stable-diffusion-webui\modules\processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Diffusion\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "C:\Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
        x = layer(x)
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 136, in forward
        return self.temporal_transformer(x)
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 194, in forward
        hidden_states = block(hidden_states)
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 248, in forward
        hidden_states = attention_block(norm_hidden_states) + hidden_states
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 337, in forward
        x = rearrange(x, "(b f) d c -> (b d) f c", f=video_length)
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 487, in rearrange
        return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
      File "C:\Diffusion\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 418, in reduce
        raise EinopsError(message + '\n {}'.format(e))
    einops.EinopsError:  Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c".
     Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'f': 16}.
     Shape mismatch, can't divide axis of length 2 in chunks of 16

It's maybe the same type of error as in this thread: https://github.com/guoyww/AnimateDiff/issues/294 ... But I'm not sure about it. Besides, that one isn't resolved anyway.
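
For what it's worth, that last einops complaint is easy to reproduce in isolation. A minimal sketch using the exact shapes from the log (nothing here is webui code, just torch + einops):

    import torch
    from einops import rearrange

    x = torch.zeros(2, 4096, 320)  # the log's shape: a plain cond/uncond image batch of 2
    try:
        rearrange(x, "(b f) d c -> (b d) f c", f=16)  # motion module assumes batch = b * 16 frames
    except Exception as e:
        print(e)  # Shape mismatch, can't divide axis of length 2 in chunks of 16

    video = torch.zeros(2 * 16, 4096, 320)  # with a real 16-frame batch the pattern works
    print(rearrange(video, "(b f) d c -> (b d) f c", f=16).shape)  # torch.Size([8192, 16, 320])

So the motion module is being handed an ordinary 2-sample image batch instead of the 16-frame video batch it expects, which would fit with the injection already failing in before_process above.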

Version numbers from the bottom of the A1111 WebUI: "version: v1.6.1  •  python: 3.10.6  •  torch: 2.0.1+cu118  •  xformers: N/A  •  gradio: 3.41.2  •  checkpoint: [15012c538f]"

And now you have a whole banquet of errors to choose from! Serve yourself! What is wrong with all of that? Is it something small and simple? Did I forget to install something?

kleethesama commented 1 week ago

I just had the same issue. Looked into it some more, and I believe this was fixed in https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/f56cebf5ba24313447b2204c3f804379767201c9, so doing a fresh install of the newest A1111 WebUI worked for me.
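
If you'd rather first check whether your existing clone already contains that commit before reinstalling, a sketch like this should work (assuming a git-based install; the path is from the original post, and you may need a git fetch first if the commit isn't known locally):

    import subprocess

    # `git merge-base --is-ancestor A B` exits 0 when commit A is an ancestor of B.
    result = subprocess.run([
        "git", "-C", r"C:\Diffusion\stable-diffusion-webui",
        "merge-base", "--is-ancestor",
        "f56cebf5ba24313447b2204c3f804379767201c9", "HEAD",
    ])
    print("fix already included" if result.returncode == 0 else "fix missing -- update or reinstall")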