MrForExample / ComfyUI-AnimateAnyone-Evolved

Improved AnimateAnyone implementation that lets you use a pose image sequence and a reference image to generate stylized video
MIT License

Error occurred when executing [ComfyUI-3D] Animate Anyone Sampler: mat1 and mat2 shapes cannot be multiplied (2x1024 and 768x320) #21

Open

grotcki commented 5 months ago

File "/content/drive/MyDrive/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-AnimateAnyone-Evolved/nodes.py", line 152, in animate_anyone
    samples = diffuser(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-AnimateAnyone-Evolved/src/models/main_diffuser.py", line 440, in __call__
    latents = self.denoise_loop(
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-AnimateAnyone-Evolved/src/models/main_diffuser.py", line 315, in denoise_loop
    self.reference_unet(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-AnimateAnyone-Evolved/src/models/unet_2d_condition.py", line 1197, in forward
    sample, res_samples = downsample_block(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-AnimateAnyone-Evolved/src/models/unet_2d_blocks.py", line 657, in forward
    hidden_states, ref_feature = attn(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-AnimateAnyone-Evolved/src/models/transformer_2d.py", line 357, in forward
    hidden_states = block(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-AnimateAnyone-Evolved/src/models/mutual_self_attention.py", line 241, in hacked_basic_transformer_inner_forward
    attn_output = self.attn2(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 527, in forward
    return self.processor(
File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 1246, in __call__
    key = attn.to_k(encoder_hidden_states, *args)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/diffusers/models/lora.py", line 430, in forward
    out = super().forward(hidden_states)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/linear.py", line 116, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x1024 and 768x320)
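A note on what the shapes in the error mean (my reading of the traceback, not from the repo docs): the failing layer is the cross-attention `to_k`, a 768-to-320 linear projection, so it expects 768-dim CLIP image embeddings as produced by the SD1.5-compatible ViT-L CLIP Vision model; a ViT-H CLIP Vision model outputs 1024-dim embeddings, which matches the `2x1024` side of the error. A minimal sketch, using NumPy to stand in for PyTorch's `F.linear` (which computes `input @ weight.T`):

```python
import numpy as np

# to_k is Linear(768 -> 320); its weight has shape (out_features, in_features).
w_to_k = np.zeros((320, 768))

def to_k(x, w):
    # F.linear(input, weight) is input @ weight.T
    return x @ w.T

vit_l_embed = np.zeros((2, 768))   # SD1.5 / ViT-L CLIP Vision embedding
vit_h_embed = np.zeros((2, 1024))  # ViT-H CLIP Vision embedding (wrong model)

print(to_k(vit_l_embed, w_to_k).shape)  # (2, 320) -- works

try:
    to_k(vit_h_embed, w_to_k)
except ValueError as e:
    # NumPy's analogue of "mat1 and mat2 shapes cannot be multiplied
    # (2x1024 and 768x320)"
    print("shape mismatch:", e)
```

So the error is raised before any sampling happens: the embedding dimension of the loaded CLIP Vision model simply does not match the reference UNet's projection weights.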

zymox commented 5 months ago

Use the right CLIP Vision model: https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved/issues/17

DougPP commented 4 months ago

Use the right ClipVision #17

I followed the linked instructions and placed the CLIP Vision model in the correct folder, "ComfyUI\models\clip_vision\SD1.5", but I still receive the same error message.
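If the error persists after moving the file, it may be worth verifying that the checkpoint in that folder really is the 768-dim (ViT-L) model rather than trusting the filename. A hedged sketch: assuming the checkpoint follows the Hugging Face transformers CLIP layout, where `visual_projection.weight` has shape `(projection_dim, hidden_size)` (ComfyUI-converted checkpoints may use different key names), the first dimension tells you the image-embedding size the model produces:

```python
import numpy as np

def image_embed_dim(state_dict):
    """Return the CLIP Vision image-embedding size from a state dict.

    Assumes the HF transformers key layout; adjust the key name for
    other checkpoint formats.
    """
    w = state_dict["visual_projection.weight"]  # (projection_dim, hidden_size)
    return w.shape[0]

# Simulated state dicts (shapes only) for the two common CLIP Vision variants:
vit_l = {"visual_projection.weight": np.zeros((768, 1024))}   # SD1.5-compatible
vit_h = {"visual_projection.weight": np.zeros((1024, 1280))}  # triggers the error

print(image_embed_dim(vit_l))  # 768  -> matches the reference UNet's to_k
print(image_embed_dim(vit_h))  # 1024 -> the 2x1024 side of the error
```

If the checkpoint on disk reports 1024 here, it is a ViT-H model regardless of what the file is named, and swapping in the 768-dim SD1.5 CLIP Vision model should resolve the shape mismatch.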