Is this with the very latest version? Kinda sounds like the clip_vision or the clip model is not correct.
yup yup..
Using ViT-H:
but I have options for other CLIP models. Am I using the correct one?
That should be correct, but how about the CLIP model for the text encoder? That needs to be the 2.1 CLIP model. I do have helper nodes to autodownload the correct ones too:
Or use this link for the 2.1 CLIP: https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/text_encoder/model.safetensors?download=true
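(Editor's note: if the autodownload nodes don't trigger for some reason, the same file can be fetched manually with huggingface_hub. This is just a sketch; the destination folder ComfyUI/models/clip is an assumption, adjust it to your install.)

```python
# Minimal sketch: fetch the SD 2.1 CLIP text encoder from Hugging Face.
# Assumption: files should land under ComfyUI/models/clip -- change
# local_dir to wherever your ComfyUI install expects CLIP models.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-2-1",
    filename="text_encoder/model.safetensors",
    local_dir="ComfyUI/models/clip",
)
print("Downloaded 2.1 CLIP text encoder to:", path)
```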
Hmm, I've reworked the workflow and am now receiving these errors:
(snip)
Prompt executed in 130.06 seconds
got prompt
'🔥 - 12 Nodes not included in prompt but is activated'
CLIP: [a man blinking]
Requested to load SD2ClipModel
Loading 1 new model
CLIP: [watermarks]
VAE using dtype: torch.bfloat16
Requested to load CLIPVisionModelProjection
Loading 1 new model
!!! Exception during processing!!! mat1 and mat2 shapes cannot be multiplied (257x1664 and 1280x1024)
Traceback (most recent call last):
  File "D:\COMFYUI_BETA\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\COMFYUI_BETA\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\COMFYUI_BETA\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\COMFYUI_BETA\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 394, in process
    img_emb = self.model.image_proj_model(cond_images)
  File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\COMFYUI_BETA\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\lvdm\modules\encoders\resampler.py", line 136, in forward
    x = self.proj_in(x)
  File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (257x1664 and 1280x1024)
Prompt executed in 2.14 seconds
(snip)
Am I setting this up correctly??
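(Editor's note: the numbers in the error suggest a clip_vision mismatch. Judging from the shapes alone, the failing layer is a Linear(1280, 1024), i.e. it expects 1280-wide ViT-H/14 image features, while 1664 is the hidden size of ViT-bigG/14, the vision model used for SDXL IP-Adapter. The layer sizes below are inferred from the error message, not read from the wrapper's code.)

```python
# Sketch of the failing matmul, with layer sizes inferred from the error
# message (257x1664 input vs. 1280x1024 weight) -- not the wrapper's code.
import torch
import torch.nn as nn

proj_in = nn.Linear(1280, 1024)           # expects ViT-H/14 features (width 1280)

vit_h_tokens = torch.randn(257, 1280)     # ViT-H/14 hidden states: compatible
vit_bigg_tokens = torch.randn(257, 1664)  # ViT-bigG/14 hidden states: too wide

print(proj_in(vit_h_tokens).shape)        # torch.Size([257, 1024])

try:
    proj_in(vit_bigg_tokens)
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (257x1664 and 1280x1024)
```

If that reading is right, pointing the clip_vision loader at the ViT-H model instead of the bigG one should clear the error.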
Do my example workflows work for you? I'm not familiar with those encoding nodes in your workflow.
Checking.. give me 15 mins (downloading the models from the examples, even though I have them, just in case).
The encoders I'm using are from DrLtData's ImpactPack WildCard Encoder, a nice alternative to WAS's
dynamicrafter_i2v_example_01.json is working, but I'm seeing weirdness in the render (note the pillars). Is there any way to mask areas of influence?
tooncrafter_example_01.json (working as intended!)
Hi Kijai, I have the same runtime error for DynamiCrafterI2V when I use dynamicrafter_i2v_example_01.json. The tooncrafter examples work. I just installed the tool, so I'm guessing it is the latest version, less than 5 hours ago.
Kijai (you're amazing!!), I'm testing your implementation of ToonCrafter, and I'm receiving this error. How can I fix it?
(snip)
Error occurred when executing DynamiCrafterI2V:
mat1 and mat2 shapes cannot be multiplied (257x1664 and 1280x1024)
File "D:\COMFYUI_BETA\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\COMFYUI_BETA\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\COMFYUI_BETA\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\COMFYUI_BETA\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 394, in process img_emb = self.model.image_proj_model(cond_images) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\COMFYUI_BETA\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\lvdm\modules\encoders\resampler.py", line 136, in forward x = self.proj_in(x) ^^^^^^^^^^^^^^^ File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl return forward_call(args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 116, in forward return F.linear(input, self.weight, self.bias) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^