kijai / ComfyUI-MimicMotionWrapper

Apache License 2.0

OSError: Error no file named diffusion_pytorch_model.fp16.bin found in directory #6

Open laogu1027 opened 4 months ago

laogu1027 commented 4 months ago

Which model should be placed there?

Placing svd_xt_1_1.safetensors in ComfyUI\models\diffusers gives an error.

Thanks

kijai commented 4 months ago

It needs to be the diffusers version, so all of this: [screenshot of the model folder structure]

And in each folder the .json files and the fp16 .safetensors file, for example the unet: [screenshot of the unet folder contents]

This should all be handled automatically by the loader node; is that not working for you?
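
For reference, here is a minimal sketch of downloading the diffusers-format model manually with huggingface_hub, in case the automatic download fails (for example because of network problems). The repo id and file patterns are assumptions based on the standard layout of the stabilityai/stable-video-diffusion-img2vid-xt-1-1 repo, which is gated, so you need to have accepted its license on Hugging Face:

```python
# Sketch of a manual fallback download (not part of the wrapper).
# Assumes huggingface_hub is installed and you have access to the gated SVD 1.1 repo.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    local_dir="ComfyUI/models/diffusers/stable-video-diffusion-img2vid-xt-1-1",
    allow_patterns=[
        "*.json",              # model_index.json and the per-folder config .json files
        "*fp16*.safetensors",  # fp16 variant weights for unet, vae, image_encoder
    ],
)
```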

laogu1027 commented 4 months ago

OK, it was just a connection problem. But when using your original workflow: retrieve_timesteps() takes from 1 to 4 positional arguments but 5 were given

kijai commented 4 months ago

> OK, it was just a connection problem. But when using your original workflow: retrieve_timesteps() takes from 1 to 4 positional arguments but 5 were given

You need to update your diffusers version.
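
If you are unsure what is installed, a quick check (a sketch; run it with the same Python that ComfyUI uses, e.g. the portable python_embeded interpreter, since the exact minimum version required by the wrapper is not stated here):

```python
# Print the installed diffusers version; older releases have a
# retrieve_timesteps() signature that accepts fewer arguments.
import diffusers
print(diffusers.__version__)

# To upgrade inside the portable ComfyUI install, something like:
#   python_embeded\python.exe -m pip install -U diffusers
```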

xyff83 commented 4 months ago

微信截图_20240704165615 (attached screenshot): what does this error mean?

laogu1027 commented 4 months ago

Thank you, it's done now.

laogu1027 commented 4 months ago

It was a network problem, bro.

winniewlx commented 21 hours ago

Error occurred when executing DownloadAndLoadMimicMotionModel:

Error no file named pytorch_model.fp16.bin, model.fp16.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory C:\Users\30759\Desktop\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1.

File "C:\Users\30759\Desktop\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\30759\Desktop\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\30759\Desktop\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\30759\Desktop\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MimicMotionWrapper\nodes.py", line 134, in loadmodel self.image_encoder = CLIPVisionModelWithProjection.from_pretrained(svd_path, subfolder="image_encoder", variant="fp16", low_cpu_mem_usage=True).to(dtype).to(device).eval() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\30759\Desktop\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 3460, in from_pretrained raise EnvironmentError(