When I tried to run the example workflow, I got the following error:
layers per block is 2
Loading pipeline components...: 0%| | 0/5 [00:00<?, ?it/s]C:\Comfy\ComfyUI_windows_portable_2\ComfyUI/custom_nodes/ComfyUI-DragAnything/pretrained_models/stable-video-diffusion-img2vid\feature_extractor
C:\Comfy\ComfyUI_windows_portable_2\ComfyUI/custom_nodes/ComfyUI-DragAnything/pretrained_models/stable-video-diffusion-img2vid\image_encoder
Loading pipeline components...: 60%|███████████████████████████████████████████████████████████████ | 3/5 [00:00<00:00, 7.80it/s]C:\Comfy\ComfyUI_windows_portable_2\ComfyUI/custom_nodes/ComfyUI-DragAnything/pretrained_models/stable-video-diffusion-img2vid\vae
C:\Comfy\ComfyUI_windows_portable_2\ComfyUI/custom_nodes/ComfyUI-DragAnything/pretrained_models/stable-video-diffusion-img2vid\scheduler
The config attributes {'clip_sample': False, 'set_alpha_to_one': False, 'skip_prk_steps': True} were passed to EulerDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Loading pipeline components...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 11.32it/s]
Loading pipeline components...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:00<00:00, 10.05it/s]
You have disabled the safety checker for <class 'ComfyUI-DragAnything.utils.dift_util.OneStepSDPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
!!! Exception during processing !!!
Traceback (most recent call last):
File "C:\Comfy\ComfyUI_windows_portable_2\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI_windows_portable_2\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI_windows_portable_2\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(*slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI_windows_portable_2\ComfyUI\custom_nodes\ComfyUI-DragAnything\nodes.py", line 524, in run
validation_control_images,ids_embedding,vis_images = get_condition(target_size=(height , width),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI_windows_portable_2\ComfyUI\custom_nodes\ComfyUI-DragAnything\nodes.py", line 308, in get_condition
keyframe_dift = extract_dift_feature(first_frame, dift_model=dift_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI_windows_portable_2\ComfyUI\custom_nodes\ComfyUI-DragAnything\nodes.py", line 292, in extract_dift_feature
dift_feature = dift_model.forward(img_tensor, prompt=prompt, up_ft_index=3,ensemble_size=8)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI_windows_portable_2\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI_windows_portable_2\ComfyUI\custom_nodes\ComfyUI-DragAnything\utils\dift_util.py", line 214, in forward
prompt_embeds = self.pipe._encode_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI_windows_portable_2\python_embeded\Lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 283, in _encode_prompt
prompt_embeds = torch.cat([prompt_embeds_tuple[1], prompt_embeds_tuple[0]])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected Tensor as element 0 in argument 0, but got NoneType
Prompt executed in 4.60 seconds
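For context, this looks like it may be a diffusers version mismatch: in recent diffusers releases, `StableDiffusionPipeline._encode_prompt` is a deprecated wrapper that calls `encode_prompt()` and then runs `torch.cat([negative_embeds, positive_embeds])` for backwards compatibility. When `do_classifier_free_guidance` is `False`, the negative half is `None`, which produces exactly this `TypeError`. A possible workaround (a sketch, not tested against the node; the helper name `get_prompt_embeds` is mine) would be to call the non-deprecated `encode_prompt()` in `utils/dift_util.py` and keep only the positive embeddings:

```python
# Sketch of a workaround for the _encode_prompt call in utils/dift_util.py,
# assuming a recent diffusers where encode_prompt() returns the tuple
# (prompt_embeds, negative_prompt_embeds), with the negative half None
# when classifier-free guidance is disabled.

def get_prompt_embeds(pipe, prompt, device):
    # Call encode_prompt() directly instead of the deprecated
    # _encode_prompt(), whose backwards-compatibility
    # torch.cat([negative, positive]) fails when negative is None.
    prompt_embeds, _negative = pipe.encode_prompt(
        prompt=prompt,
        device=device,
        num_images_per_prompt=1,
        do_classifier_free_guidance=False,
    )
    return prompt_embeds
```

Pinning diffusers to the version the DragAnything repo was developed against might also avoid the crash, if anyone knows which that is.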