banodoco / Steerable-Motion

A ComfyUI node for driving videos using batches of images.

size mismatch #72

Closed: zhaowtVincent closed this issue 4 months ago

zhaowtVincent commented 4 months ago

```
[rgthree] Using rgthree's optimized recursive execution.
Prompt executor has been patched by Job Iterator!
2024-05-20 11:10:16,933- root:71- INFO- model_type EPS
2024-05-20 11:10:17,711- root:272- INFO- Using xformers attention in VAE
2024-05-20 11:10:17,713- root:272- INFO- Using xformers attention in VAE
2024-05-20 11:10:18,701- root:416- INFO- Requested to load SD1ClipModel
2024-05-20 11:10:18,702- root:426- INFO- Loading 1 new model
2024-05-20 11:10:28,539- root:416- INFO- Requested to load CLIPVisionModelProjection
2024-05-20 11:10:28,539- root:426- INFO- Loading 1 new model
2024-05-20 11:10:29,820- root:179- ERROR- !!! Exception during processing!!! Error(s) in loading state_dict for ResamplerImport: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
2024-05-20 11:10:29,822- root:180- ERROR- Traceback (most recent call last):
  File "K:\software\ComfyU\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "K:\software\ComfyU\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "K:\software\ComfyU\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(*slice_dict(input_data_all, i)))
  File "K:\software\ComfyU\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\Steerable-Motion\SteerableMotion.py", line 568, in combined_function
    model, _ = ipadapter_application.apply_ipadapter(model=model, ipadapter=ipadapter, image=torch.cat(bin.imageBatch, dim=0), weight=[x * base_ipa_advanced_settings["ipa_weight"] for x in bin.weight_schedule], weight_type=base_ipa_advanced_settings["ipa_weight_type"], start_at=base_ipa_advanced_settings["ipa_starts_at"], end_at=base_ipa_advanced_settings["ipa_ends_at"], clip_vision=clip_vision, image_negative=negative_noise, embeds_scaling=base_ipa_advanced_settings["ipa_embeds_scaling"], encode_batch_size=1, image_schedule=bin.image_schedule)
  File "K:\software\ComfyU\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\Steerable-Motion\imports\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 686, in apply_ipadapter
    image = image if isinstance(image, list) else [image]
  File "K:\software\ComfyU\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\Steerable-Motion\imports\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 334, in ipadapter_execute
    if img_comp_cond_embeds is not None:
  File "K:\software\ComfyU\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\Steerable-Motion\imports\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 69, in __init__
  File "K:\software\ComfyU\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2153, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ResamplerImport: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
```
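For context, this is the generic PyTorch failure mode when a checkpoint tensor's shape does not match the receiving module's parameter. The sketch below is a standalone toy, not the Steerable-Motion code, but it reproduces the same class of message with a plain `nn.Linear`:

```python
# Minimal standalone reproduction of this class of error (not Steerable-Motion code):
# load_state_dict() refuses to copy a checkpoint tensor whose shape differs from the
# module's own parameter.
import torch
import torch.nn as nn

# The current model expects a 1024-wide input (weight shape [768, 1024], as in the error)...
current_proj_in = nn.Linear(1024, 768)

# ...but the checkpoint was saved from a layer with a 1280-wide input ([768, 1280]).
checkpoint = {"weight": torch.zeros(768, 1280), "bias": torch.zeros(768)}

try:
    current_proj_in.load_state_dict(checkpoint)
except RuntimeError as err:
    print(err)
    # Error(s) in loading state_dict for Linear:
    #     size mismatch for weight: copying a param with shape torch.Size([768, 1280])
    #     from checkpoint, the shape in current model is torch.Size([768, 1024]).
```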

zhaowtVincent commented 4 months ago

All settings are at their defaults. I am running the creative_interpolation_example workflow from the demo directory.

KewkLW commented 4 months ago

Check whether your images are all exactly the same dimensions. If they are, are you using the batch input or the Load Images input?
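One quick way to check is a small script like the following (the folder path and extensions are placeholders, adjust them to wherever your batch images live):

```python
# Report every distinct image size found in a folder (example paths/extensions).
from pathlib import Path
from PIL import Image

image_dir = Path("ComfyUI/input")  # placeholder: folder holding the batch images
extensions = {".png", ".jpg", ".jpeg", ".webp"}

sizes = {}
for path in sorted(p for p in image_dir.iterdir() if p.suffix.lower() in extensions):
    with Image.open(path) as img:
        sizes.setdefault(img.size, []).append(path.name)

for (w, h), names in sizes.items():
    print(f"{w}x{h}: {names}")

if len(sizes) > 1:
    print("Mixed dimensions detected: resize everything to one resolution before batching.")
```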

peteromallet commented 4 months ago

> Check whether your images are all exactly the same dimensions. If they are, are you using the batch input or the Load Images input?

Kewk is correct - your images look like they are different sizes. The current workflow should resize them, though!
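If the workflow's own resizing does not kick in, a pre-pass like this can normalise the batch up front (the folders and target resolution below are placeholders):

```python
# Example pre-pass: resize every image in a folder to one resolution before loading the batch.
from pathlib import Path
from PIL import Image

src = Path("ComfyUI/input")            # placeholder: source folder
dst = Path("ComfyUI/input_resized")    # placeholder: output folder
dst.mkdir(parents=True, exist_ok=True)
target = (512, 512)                    # whatever resolution your workflow expects

for path in sorted(src.glob("*.png")):
    with Image.open(path) as img:
        img.convert("RGB").resize(target, Image.LANCZOS).save(dst / path.name)
```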