MinusZoneAI / ComfyUI-Kolors-MZ

Kolors ComfyUI Native Sampler Implementation
GNU General Public License v3.0

IPadapter issue when loading new clip vision model #32

Closed: tristan22mc closed this issue 4 months ago

tristan22mc commented 4 months ago

I'm having an error when loading the new CLIP vision model to feed into my IPAdapter. The error is as follows:

Error occurred when executing CLIPVisionLoader:

Error(s) in loading state_dict for CLIPVisionModelProjection: size mismatch for vision_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([577, 1024]) from checkpoint, the shape in current model is torch.Size([257, 1024]).

File "C:\Users\trist\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\trist\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\trist\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\trist\Documents\ComfyUI_windows_portable\ComfyUI\nodes.py", line 892, in load_clip clip_vision = comfy.clip_vision.load(clip_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\trist\Documents\ComfyUI_windows_portable\ComfyUI\comfy\clip_vision.py", line 117, in load return load_clipvision_from_sd(sd) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\trist\Documents\ComfyUI_windows_portable\ComfyUI\comfy\clip_vision.py", line 101, in load_clipvision_from_sd m, u = clip.load_sd(sd) ^^^^^^^^^^^^^^^^ File "C:\Users\trist\Documents\ComfyUI_windows_portable\ComfyUI\comfy\clip_vision.py", line 46, in load_sd return self.model.load_state_dict(sd, strict=False) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\trist\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2189, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(

I downloaded the CLIP vision model from this link: https://huggingface.co/Kwai-Kolors/Kolors-IP-Adapter-Plus/blob/main/image_encoder/pytorch_model.bin

I renamed the model and added it to my clip_vision folder. I also tried updating the IPAdapter Plus node by cubiq, but still no luck.
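For context, a quick way to confirm which encoder variant the downloaded checkpoint actually is. This is an illustrative sketch, assuming PyTorch is available and the renamed file sits in the working directory; the key name and shapes come from the error above:

```python
import torch

# Inspect the downloaded image encoder checkpoint directly.
sd = torch.load("pytorch_model.bin", map_location="cpu")

pos = sd["vision_model.embeddings.position_embedding.weight"]
print(tuple(pos.shape))  # (577, 1024), as reported in the error above

# 577 positions = (336 / 14)^2 patches + 1 class token, i.e. a 336px ViT-L/14.
# ComfyUI's stock CLIPVisionModelProjection is built for 224px input,
# (224 / 14)^2 + 1 = 257 positions, hence the size mismatch on load.
patches_per_side = int((pos.shape[0] - 1) ** 0.5)
print("expected input resolution:", patches_per_side * 14)  # 336
```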

wailovet commented 4 months ago

Reference: https://github.com/MinusZoneAI/ComfyUI-Kolors-MZ/blob/main/examples/workflow_ipa.png. Use the MZ_KolorsCLIPVisionLoader node.
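For anyone wondering why a dedicated loader node is needed: the shapes in the traceback indicate a 336px CLIP ViT-L/14 image encoder (577 position embeddings), while the stock CLIPVisionLoader builds a 224px model (257 positions). Below is a rough illustration using the Hugging Face transformers library, not the actual implementation of MZ_KolorsCLIPVisionLoader, and it assumes the repo's image_encoder folder ships its config.json in the standard layout:

```python
from transformers import CLIPVisionModelWithProjection

# Build the encoder from the repo's own config so the model is sized for
# 336px input before the weights are loaded.
encoder = CLIPVisionModelWithProjection.from_pretrained(
    "Kwai-Kolors/Kolors-IP-Adapter-Plus",  # repo linked in the issue above
    subfolder="image_encoder",
)

cfg = encoder.config
# Expected to report image_size=336, patch_size=14, consistent with the
# [577, 1024] position embedding in the traceback.
print(cfg.image_size, cfg.patch_size)
print(encoder.vision_model.embeddings.position_embedding)
```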