hongchenfengfan8888 opened this issue 3 months ago
I've already updated my ComfyUI to the most recent version, but I'm still encountering this error, even though all the required dependencies meet the stated version requirements.
Error occurred when executing DiffSynthSampler:
```
permute(sparse_coo): number of dimensions in the tensor input does not match the length of the desired ordering of dimensions i.e. input.dim() = 4 is not equal to len(dims) = 3

  File "H:\AI\ComfyUI_windows\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "H:\AI\ComfyUI_windows\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "H:\AI\ComfyUI_windows\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "H:\AI\ComfyUI_windows\ComfyUI\custom_nodes\ComfyUI-DiffSynthWrapper\nodes.py", line 114, in process
    video = pipe(
  File "H:\AI\ComfyUI_windows\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\AI\ComfyUI_windows\ComfyUI\custom_nodes\ComfyUI-DiffSynth-Studio-main\diffsynth\pipelines\stable_video_diffusion.py", line 158, in __call__
    image_emb_clip_posi = self.encode_image_with_clip(input_image)
  File "H:\AI\ComfyUI_windows\ComfyUI\custom_nodes\ComfyUI-DiffSynth-Studio-main\diffsynth\pipelines\stable_video_diffusion.py", line 52, in encode_image_with_clip
    image = self.preprocess_image(image).to(device=self.device, dtype=self.torch_dtype)
  File "H:\AI\ComfyUI_windows\ComfyUI\custom_nodes\ComfyUI-DiffSynth-Studio-main\diffsynth\pipelines\stable_video_diffusion.py", line 40, in preprocess_image
    image = torch.Tensor(np.array(image, dtype=np.float32) * (2 / 255) - 1).permute(2, 0, 1).unsqueeze(0)
```