guoyww / AnimateDiff

Official implementation of AnimateDiff.
https://animatediff.github.io
Apache License 2.0

EinopsError while trying to generate a pic #204

Open zeugme opened 9 months ago

zeugme commented 9 months ago

I get this error every time I try a basic generation (although I do use prompt travel).

The full error is: EinopsError: Error while processing rearrange-reduction pattern "(b f) c h w -> b c f h w". Input tensor shape: torch.Size([2, 320, 16, 64, 64]). Additional info: {'b': 2}. Expected 4 dimensions, got 5
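
For reference, the error itself is easy to reproduce outside the webui: einops rejects the pattern because the tensor already has five dimensions, while "(b f) c h w" only describes four. A minimal sketch with plain PyTorch and einops (nothing AnimateDiff-specific):

```python
import torch
from einops import rearrange

x = torch.zeros(2, 320, 16, 64, 64)             # the shape reported in the error
rearrange(x, "(b f) c h w -> b c f h w", b=2)   # raises EinopsError: Expected 4 dimensions, got 5

# The pattern only matches a 4-D tensor where batch and frames are folded together:
ok = rearrange(torch.zeros(2 * 16, 320, 64, 64), "(b f) c h w -> b c f h w", b=2)
print(ok.shape)  # torch.Size([2, 320, 16, 64, 64])
```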

Img2img with a 512×512 picture, DDIM, 45 steps, denoising strength 0.5, and a fixed starting seed.

I tried both the Stabilized and sd_v14 motion modules; PNG format, 16 frames, batch size 16, 8 fps, closed loop (A), no interpolation. Tried with and without LoRA.

It used to work, but I have no idea what causes this.

Here's the full log:

Traceback (most recent call last):
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 410, in reduce
    return _apply_recipe(recipe, tensor, reduction_type=reduction)
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 233, in _apply_recipe
    _reconstruct_from_shape(recipe, backend.shape(tensor))
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 163, in _reconstruct_from_shape_uncached
    raise EinopsError('Expected {} dimensions, got {}'.format(len(self.input_composite_axes), len(shape)))
einops.EinopsError: Expected 4 dimensions, got 5

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Apps\A1111\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\Apps\A1111\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\modules\img2img.py", line 208, in img2img
    processed = process_images(p)
  File "D:\Apps\A1111\stable-diffusion-webui\modules\processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "D:\Apps\A1111\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 119, in hacked_processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\Apps\A1111\stable-diffusion-webui\modules\processing.py", line 1528, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "D:\Apps\A1111\stable-diffusion-webui\modules\sd_samplers_timesteps.py", line 133, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\Apps\A1111\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "D:\Apps\A1111\stable-diffusion-webui\modules\sd_samplers_timesteps.py", line 133, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\modules\sd_samplers_timesteps_impl.py", line 24, in ddim
    e_t = model(x, timesteps[index].item() * s_in, **extra_args)
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 274, in mm_cfg_forward
    x_out = mm_sd_forward(self, x_in, sigma_in, cond_in, image_cond_in, make_condition_dict) # hook
  File "D:\Apps\A1111\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 188, in mm_sd_forward
    out = self.inner_model(x_in[_context], sigma_in[_context], cond=make_condition_dict(cond_in[_context], image_cond_in[_context]))
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\modules\sd_samplers_timesteps.py", line 30, in forward
    return self.inner_model.apply_model(input, timesteps, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\Apps\A1111\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
    return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 82, in forward
    x = layer(x, emb)
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 249, in forward
    return checkpoint(
  File "D:\Apps\A1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "D:\Apps\A1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "D:\Apps\A1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 262, in _forward
    h = self.in_layers(x)
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
    input = module(input)
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Apps\A1111\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 101, in groupnorm32_mm_forward
    x = gn32_original_forward(self, x)
  File "D:\Apps\A1111\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 100, in groupnorm32_mm_forward
    x = rearrange(x, "(b f) c h w -> b c f h w", b=2)
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 487, in rearrange
    return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
  File "D:\Apps\A1111\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 418, in reduce
    raise EinopsError(message + '\n {}'.format(e))
einops.EinopsError:  Error while processing rearrange-reduction pattern "(b f) c h w -> b c f h w".
 Input tensor shape: torch.Size([2, 320, 16, 64, 64]). Additional info: {'b': 2}.
 Expected 4 dimensions, got 5
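
Judging from the last frames of the traceback, the hooked GroupNorm32 forward in animatediff_mm.py assumes its input is still 4-D, i.e. (batch*frames, c, h, w), and unconditionally folds the frame axis out of the batch axis. Here the activation is already 5-D by the time the hook runs, so the rearrange pattern no longer matches. A rough sketch of a guarded variant (the names and the guard itself are my own illustration, not the extension's actual code or an official fix):

```python
import torch
from torch import nn
from einops import rearrange

norm = nn.GroupNorm(32, 320)  # stand-in for the original GroupNorm32 layer

def guarded_groupnorm_forward(x: torch.Tensor, b: int = 2) -> torch.Tensor:
    if x.ndim != 4:
        # Something upstream already reshaped the activation; skip the video-specific reshuffle.
        return norm(x)
    x = rearrange(x, "(b f) c h w -> b c f h w", b=b)  # fold frames out of the batch axis
    x = norm(x)                                        # GroupNorm over (b, c, f, h, w)
    return rearrange(x, "b c f h w -> (b f) c h w")    # fold frames back into the batch axis

print(guarded_groupnorm_forward(torch.zeros(32, 320, 64, 64)).shape)     # 4-D input: hook path
print(guarded_groupnorm_forward(torch.zeros(2, 320, 16, 64, 64)).shape)  # 5-D input: no crash
```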
ajs commented 9 months ago

Seeing the same error. Using padding for prompt/negative prompt length and a 3-second video. Here are my parameters:

[screenshot of generation parameters]