MinusZoneAI / ComfyUI-Kolors-MZ

Kolors的ComfyUI原生采样器实现(Kolors ComfyUI Native Sampler Implementation)
GNU General Public License v3.0

The size of tensor a (577) must match the size of tensor b (257) at non-singleton dimension 1 #52

Open coolgech1978 opened 1 month ago

coolgech1978 commented 1 month ago

Error occurred when executing MZ_IPAdapterAdvancedKolors:

The size of tensor a (577) must match the size of tensor b (257) at non-singleton dimension 1

File "/home/chawk/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/chawk/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/chawk/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/chawk/ComfyUI/custom_nodes/ComfyUI-Kolors-MZ/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 786, in apply_ipadapter
    work_model, face_image = ipadapter_execute(work_model, ipadapter_model, clip_vision, **ipa_args)
File "/home/chawk/ComfyUI/custom_nodes/ComfyUI-Kolors-MZ/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 334, in ipadapter_execute
    img_cond_embeds = encode_image_masked(clipvision, image, batch_size=encode_batch_size, size=image_size)
File "/home/chawk/ComfyUI/custom_nodes/ComfyUI-Kolors-MZ/ComfyUI_IPAdapter_plus/utils.py", line 177, in encode_image_masked
    out = clip_vision.model(pixel_values=pixel_values, intermediate_output=-2)
File "/home/chawk/venv/comfyvenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/chawk/venv/comfyvenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
File "/home/chawk/ComfyUI/comfy/clip_model.py", line 192, in forward
    x = self.vision_model(*args, **kwargs)
File "/home/chawk/venv/comfyvenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/chawk/venv/comfyvenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
File "/home/chawk/ComfyUI/comfy/clip_model.py", line 178, in forward
    x = self.embeddings(pixel_values)
File "/home/chawk/venv/comfyvenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/chawk/venv/comfyvenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
File "/home/chawk/ComfyUI/comfy/clip_model.py", line 160, in forward
    return torch.cat([self.class_embedding.to(embeds.device).expand(pixel_values.shape[0], 1, -1), embeds], dim=1) + self.position_embedding.weight.to(embeds.device)
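For context, the two sizes in the error line up with the sequence lengths of a CLIP ViT-L/14 vision encoder at two different input resolutions: a 336×336 input produces (336/14)² = 576 patch tokens plus one class token (577), while a 224×224 input produces 256 + 1 = 257. So a 336-px image embedding is being added to position embeddings from a 224-px checkpoint (or vice versa). A minimal check of that arithmetic:

```python
def clip_vit_seq_len(image_size: int, patch_size: int = 14) -> int:
    """Sequence length of a CLIP ViT: number of patch tokens plus one class token."""
    return (image_size // patch_size) ** 2 + 1

print(clip_vit_seq_len(336))  # 577, the size of tensor a
print(clip_vit_seq_len(224))  # 257, the size of tensor b
```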

wailovet commented 1 month ago

Show me a screenshot of your workflow.

coolgech1978 commented 1 month ago

The first workflow I tried is kolors_ipa_workflow1.json, a JSON file from ComfyUI-Kolors-MZ.

The second is ipadapter_kolors_example_in_ipadapter_plus.json, a JSON file from the latest version of ComfyUI_IPAdapter_plus.

NyxWeigh commented 1 month ago

same error here

YacratesWyh commented 1 month ago

same

YacratesWyh commented 1 month ago

Solved: you need to use his CLIP loader instead of the default one. Maybe something is different in the implementation; you may need to update your example.

wailovet commented 1 month ago

> Solved: you need to use his CLIP loader instead of the default one. Maybe something is different in the implementation; you may need to update your example.

Support for the 336 CLIP vision model has actually been merged into ComfyUI; see https://github.com/comfyanonymous/ComfyUI/pull/4042

This problem may occur only because your ComfyUI version is not the latest.

yatoubusha commented 1 month ago

> Solved: you need to use his CLIP loader instead of the default one. Maybe something is different in the implementation; you may need to update your example.

I used the MZ CLIP loader, but the error still exists. [screenshot]

yatoubusha commented 1 month ago

> Solved: you need to use his CLIP loader instead of the default one. Maybe something is different in the implementation; you may need to update your example.
>
> Support for the 336 CLIP vision model has actually been merged into ComfyUI; see comfyanonymous/ComfyUI#4042
>
> This problem may occur only because your ComfyUI version is not the latest.

I updated my ComfyUI version and restarted, but the error still exists. Should I load "openai/clip-vit-large-patch14-336" as the image encoder?

wailovet commented 1 month ago

> Solved: you need to use his CLIP loader instead of the default one. Maybe something is different in the implementation; you may need to update your example.
>
> Support for the 336 CLIP vision model has actually been merged into ComfyUI; see comfyanonymous/ComfyUI#4042 This problem may occur only because your ComfyUI version is not the latest.
>
> I updated my ComfyUI version and restarted, but the error still exists. Should I load "openai/clip-vit-large-patch14-336" as the image encoder?

The CLIP vision encoder should be taken from pytorch_model.bin.

This is the official model repository of KolorsIPA.
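One way to tell which input resolution a downloaded CLIP vision checkpoint expects is to look at the length of its position embedding. This is a hypothetical helper (not part of the node pack), assuming the Hugging Face CLIP key naming:

```python
# Hugging Face CLIP key naming (assumption; other exports may use a different key):
POS_KEY = "vision_model.embeddings.position_embedding.weight"

def expected_image_size(state_dict, patch_size: int = 14) -> int:
    """Infer the input resolution a CLIP ViT-L/14 checkpoint expects
    from the length of its position embedding."""
    pos = state_dict[POS_KEY]        # shape: [seq_len, dim]
    num_patches = pos.shape[0] - 1   # subtract the class token
    side = int(round(num_patches ** 0.5))
    return side * patch_size

# Usage sketch (requires torch; the path is an example):
#   import torch
#   sd = torch.load("pytorch_model.bin", map_location="cpu")
#   print(expected_image_size(sd))   # e.g. 224 or 336
```

A checkpoint reporting 224 here would reproduce the 577-vs-257 mismatch when fed 336-px inputs.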

yatoubusha commented 1 month ago

> Solved: you need to use his CLIP loader instead of the default one. Maybe something is different in the implementation; you may need to update your example.
>
> I used the MZ CLIP loader, but the error still exists. [screenshot]

Solved: you can use the "workflow_ipa_legacy.png" workflow. All related loaders must be changed to the author's own.

coolgech1978 commented 1 month ago

There are some differences between the two files named "pytorch_model.bin": one was downloaded from Hugging Face, the other from hf-mirror.