guoyww / AnimateDiff

Official implementation of AnimateDiff.
https://animatediff.github.io
Apache License 2.0

HINT: We don't support broadcasting, please use `expand` yourself before calling `memory_efficient_attention` #303

Open nianchu1 opened 7 months ago

nianchu1 commented 7 months ago

ValueError: Incompatible shapes for attention inputs: query.shape: torch.Size([32, 14400, 8, 40]) key.shape : torch.Size([2, 77, 8, 40]) value.shape: torch.Size([2, 77, 8, 40]) HINT: We don't support broadcasting, please use expand yourself before calling memory_efficient_attention if you need to

When I use AnimateDiff, the error above occurs. What does it mean?
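For context, the message comes from xformers: `memory_efficient_attention` requires query, key, and value to share the same batch dimension, and it refuses to broadcast a smaller key/value batch (here 2) up to the query batch (here 32, which looks like batch × frames). Below is a minimal sketch of the workaround the HINT suggests, assuming xformers is installed and using the shapes from the traceback; it uses `repeat_interleave` rather than `expand` for brevity, and is only an illustration, not the AnimateDiff code itself:

```python
import torch
import xformers.ops as xops

# Shapes from the traceback. The query batch is likely 2 samples * 16 frames
# flattened to 32, while the text-encoder key/value still have batch 2 (assumption).
q = torch.randn(32, 14400, 8, 40, device="cuda", dtype=torch.float16)
k = torch.randn(2, 77, 8, 40, device="cuda", dtype=torch.float16)
v = torch.randn(2, 77, 8, 40, device="cuda", dtype=torch.float16)

# memory_efficient_attention does not broadcast, so replicate key/value along
# the batch dimension until they match the query (32 / 2 = 16 copies per sample).
repeats = q.shape[0] // k.shape[0]
k = k.repeat_interleave(repeats, dim=0)
v = v.repeat_interleave(repeats, dim=0)

out = xops.memory_efficient_attention(q, k, v)
print(out.shape)  # torch.Size([32, 14400, 8, 40])
```

The snippet only shows what the HINT is asking for: make the key/value batch match the query batch before the call. When the error appears inside a webui extension, the mismatch happens inside that extension's attention code, so a version/compatibility fix is usually needed rather than patching shapes by hand.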

qooyvonne commented 7 months ago

I also have the same problem when I use "Reference" in ControlNet v1.1.443.

ValueError: Incompatible shapes for attention inputs: query.shape: torch.Size([2, 7225, 8, 40]) key.shape : torch.Size([1, 77, 8, 40]) value.shape: torch.Size([1, 77, 8, 40]) HINT: We don't support broadcasting, please use expand yourself before calling memory_efficient_attention if you need to

DenSckriva commented 6 months ago

Same problem for me with Tiled Diffusion & VAE extension. :(

zhichaoLii commented 4 months ago

> Same problem for me with Tiled Diffusion & VAE extension. :(

Have you solved the problem in the multidiffusion webui extension?

Wangbenzhi commented 4 months ago

I solved this issue by setting `drop_last=False` in the dataloader.
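For reference, `drop_last` is just an argument of PyTorch's `DataLoader`; a minimal sketch of where it goes (the dataset below is an illustrative stand-in, not AnimateDiff's video dataset):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset: 100 clips of 16 frames, 3x64x64 each (illustrative only).
dataset = TensorDataset(torch.randn(100, 16, 3, 64, 64))

# drop_last=True discards the final incomplete batch (here 100 % 8 = 4 samples);
# drop_last=False keeps it, so the last iteration has a smaller batch size.
loader = DataLoader(dataset, batch_size=8, shuffle=True, drop_last=False)

for (clips,) in loader:
    print(clips.shape)  # torch.Size([8, 16, 3, 64, 64]) except for the last batch
```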

liyuantsao commented 3 months ago

I have the same question.

jxrnxx commented 3 months ago

I have the same question.

zz13526585541 commented 2 months ago

I also have the same problem when I use "Reference" in ControlNet v1.1.443.

ValueError: Incompatible shapes for attention inputs: query.shape: torch.Size([2, 7225, 8, 40]) key.shape : torch.Size([1, 77, 8, 40]) value.shape: torch.Size([1, 77, 8, 40]) HINT: We don't support broadcasting, please use `expand` yourself before calling `memory_efficient_attention` if you need to

In the Optimization settings, change "Negative Guidance minimum sigma" ([PR] skip the negative prompt when the image is close to completion...) to 0.