TMElyralab / Comfyui-MusePose


mat1 and mat2 shapes cannot be multiplied #34

Open xxinlei opened 3 weeks ago

xxinlei commented 3 weeks ago

Has anyone run into the same error? I've searched the existing issues and asked ChatGPT without success.
How can I fix this? Thank you ><

2024-06-08 00:36:48,912- root:179- ERROR- !!! Exception during processing!!! Error(s) in loading state_dict for CLIPVisionModelWithProjection:
    size mismatch for vision_model.embeddings.class_embedding: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for vision_model.embeddings.patch_embedding.weight: copying a param with shape torch.Size([1024, 3, 14, 14]) from checkpoint, the shape in current model is torch.Size([768, 3, 32, 32]).
    size mismatch for vision_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([257, 1024]) from checkpoint, the shape in current model is torch.Size([50, 768]).
    size mismatch for vision_model.pre_layrnorm.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
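The mismatched shapes themselves identify the two architectures. Assuming the standard 224-pixel CLIP input size, the position-embedding counts decode to a ViT-L/14-style encoder on the checkpoint side versus the default ViT-B/32 configuration on the model side (pure arithmetic, not MusePose code):

```python
# CLIP vision encoders keep one position embedding per image patch,
# plus one for the class token.
def n_positions(image_size: int, patch_size: int) -> int:
    return (image_size // patch_size) ** 2 + 1

# Checkpoint side: hidden width 1024, 14x14 patches -> ViT-L/14-style encoder
print(n_positions(224, 14))  # 257, matching torch.Size([257, 1024])

# Instantiated model: hidden width 768, 32x32 patches -> default ViT-B/32 config
print(n_positions(224, 32))  # 50, matching torch.Size([50, 768])
```

So the weights on disk and the config used to build the model describe two different CLIP variants, which is why every embedding tensor fails to copy.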

I added ignore_mismatched_sizes=True to the from_pretrained call, but then another error occurred:

Exception during processing!!! mat1 and mat2 shapes cannot be multiplied (2x512 and 768x320)
2024-06-08 01:33:33,182- root:180- ERROR- Traceback (most recent call last):
  File "D:\User\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\ComfyUI\custom_nodes\Comfyui-MusePose\nodes.py", line 791, in musepose_func
    return musepose(args, image_path, video)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\ComfyUI\custom_nodes\Comfyui-MusePose\nodes.py", line 735, in musepose
    return handle_single(image_path, video)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\ComfyUI\custom_nodes\Comfyui-MusePose\nodes.py", line 713, in handle_single
    video = pipe(
            ^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\ComfyUI\custom_nodes\Comfyui-MusePose\musepose\pipelines\pipeline_pose2vid_long.py", line 467, in __call__
    self.reference_unet(
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\ComfyUI\custom_nodes\Comfyui-MusePose\musepose\models\unet_2d_condition.py", line 1196, in forward
    sample, res_samples = downsample_block(
                          ^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\ComfyUI\custom_nodes\Comfyui-MusePose\musepose\models\unet_2d_blocks.py", line 657, in forward
    hidden_states, ref_feature = attn(
                                 ^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\ComfyUI\custom_nodes\Comfyui-MusePose\musepose\models\transformer_2d.py", line 356, in forward
    hidden_states = block(
                    ^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\ComfyUI\custom_nodes\Comfyui-MusePose\musepose\models\mutual_self_attention.py", line 242, in hacked_basic_transformer_inner_forward
    attn_output = self.attn2(
                  ^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\attention_processor.py", line 512, in forward
    return self.processor(
           ^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\attention_processor.py", line 1231, in __call__
    key = attn.to_k(encoder_hidden_states, *args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\lora.py", line 430, in forward
    out = super().forward(hidden_states)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\User\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x512 and 768x320)
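The second error appears to follow directly from the first: with ignore_mismatched_sizes=True the ViT-B/32 default config is kept, whose projection head emits 512-dim image embeddings, while the pretrained UNet's cross-attention to_k layer expects 768-dim conditioning. A minimal torch sketch (shapes taken from the traceback; the layer is only a stand-in, not MusePose's actual module) reproduces the exact message:

```python
import torch

# Image embeddings as a ViT-B/32 projection head would produce them
# (projection_dim=512); the batch of 2 matches the "2x512" in the error.
image_embeds = torch.randn(2, 512)

# Stand-in for the UNet cross-attention to_k projection, built for
# 768-dim conditioning as in the traceback's F.linear call.
to_k = torch.nn.Linear(768, 320, bias=False)

try:
    to_k(image_embeds)
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (2x512 and 768x320)
```

This suggests the state_dict mismatch above needs to be resolved (i.e. the image encoder loaded with the config that matches its weights) rather than suppressed, since suppression only moves the incompatibility downstream into the UNet.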