Open chaorenai opened 6 months ago
I have the same problem: EVA02-CLIP-L-14-336 does not download automatically from huggingface. The error reads: "An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on." I made sure the network is fine, but it still won't download, even though I know what the download path should be!
same error
I placed the file I downloaded, EVA02_CLIP_L_336_psz14_s6B.pt, in ~/.cache/huggingface/hub/models--QuanSun--EVA-CLIP/snapshots/11afd202f2ae80869d6cef18b1ec775e79bd8d12/ on my local machine. However, after running it I still get the same error message. It seems that downloading this one file is not enough; there are apparently more files to download, and if your connection to the Hub is unreliable it still cannot be used.
me too...
me too
> I placed EVA02_CLIP_L_336_psz14_s6B.pt in ~/.cache/huggingface/hub/models--QuanSun--EVA-CLIP/snapshots/11afd202f2ae80869d6cef18b1ec775e79bd8d12/, but I still get the same error.
This method does work, but it also needs one more file: models--QuanSun--EVA-CLIP/refs/main, a plain-text file whose content is 11afd202f2ae80869d6cef18b1ec775e79bd8d12. Tested, and it can be used.
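For reference, the cache layout described in this thread looks roughly like this (the hash is the snapshot folder name above; newer huggingface_hub versions usually also keep a blobs/ folder, but according to the comments here the checkpoint placed directly under snapshots/ is enough):

```
~/.cache/huggingface/hub/models--QuanSun--EVA-CLIP/
├── refs/
│   └── main    (text file containing: 11afd202f2ae80869d6cef18b1ec775e79bd8d12)
└── snapshots/
    └── 11afd202f2ae80869d6cef18b1ec775e79bd8d12/
        └── EVA02_CLIP_L_336_psz14_s6B.pt
```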
> I placed EVA02_CLIP_L_336_psz14_s6B.pt in ~/.cache/huggingface/hub/models--QuanSun--EVA-CLIP/snapshots/11afd202f2ae80869d6cef18b1ec775e79bd8d12/, but I still get the same error.
> This method does work, but it also needs models--QuanSun--EVA-CLIP/refs/main, a plain-text file containing 11afd202f2ae80869d6cef18b1ec775e79bd8d12.
Thanks, it's working.
I saw a lot of EVA files under models--QuanSun--EVA-CLIP on huggingface. When I downloaded only EVA02_CLIP_L_336_psz14_s6B.pt and created models--QuanSun--EVA-CLIP/refs/main, I still got an error. Should I put all the files from huggingface there? Will it work after downloading everything?
Why not just put the checkpoint in the path /models/eva_clips and have Load_EVA_Clip take a checkpoint name list, like the IPAdapter loader?
And models/facedetection is already used by ReActor, so I add the path:
```python
# register models/facedetection with ComfyUI's folder_paths,
# reusing the entry if another extension (e.g. ReActor) already added it
FACEDETECTION_DIR = os.path.join(folder_paths.models_dir, "facedetection")
if "facedetection" not in folder_paths.folder_names_and_paths:
    current_paths = [FACEDETECTION_DIR]
else:
    current_paths, _ = folder_paths.folder_names_and_paths["facedetection"]
folder_paths.folder_names_and_paths["facedetection"] = (current_paths, folder_paths.supported_pt_extensions)
```
and pass the model_rootpath to FaceRestoreHelper:
```python
face_helper = FaceRestoreHelper(
    upscale_factor=1,
    face_size=512,
    crop_ratio=(1, 1),
    det_model='retinaface_resnet50',
    save_ext='png',
    device=device,
    model_rootpath=FACEDETECTION_DIR,
)
```
My cache path is .cache/huggingface/hub/models--QuanSun--EVA-CLIP; just put the downloaded file in this folder.
With pdb I found that the download code is in ComfyUI/custom_nodes/PuLID_ComfyUI/eva_clip/pretrained.py:
```python
def download_pretrained_from_hf(
        model_id: str,
        filename: str = 'open_clip_pytorch_model.bin',
        revision=None,
        cache_dir: Union[str, None] = None,
):
    has_hf_hub(True)
    cached_file = hf_hub_download(model_id, filename, revision=revision, cache_dir=cache_dir)
    return cached_file
```
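Since the node simply calls hf_hub_download into the standard cache, one possible workaround (a minimal sketch, not part of the node itself) is to pre-populate that cache from an environment that can actually reach huggingface.co and then reuse or copy the resulting ~/.cache/huggingface/hub folder:

```python
# Hypothetical one-off script: download the checkpoint PuLID_ComfyUI asks for
# into the standard Hugging Face cache, which is where the loader looks.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="QuanSun/EVA-CLIP",
    filename="EVA02_CLIP_L_336_psz14_s6B.pt",
)
# prints something like:
# ~/.cache/huggingface/hub/models--QuanSun--EVA-CLIP/snapshots/<hash>/EVA02_CLIP_L_336_psz14_s6B.pt
print(path)
```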
Loaded EVA02-CLIP-L-14-336 model config. Shape of rope freq: torch.Size([576, 64]) Loading pretrained EVA02-CLIP-L-14-336 weights (eva_clip). incompatible_keys.missing_keys: ['visual.rope.freqs_cos', 'visual.rope.freqs_sin', 'visual.blocks.0.attn.rope.freqs_cos', 'visual.blocks.0.attn.rope.freqs_sin', 'visual.blocks.1.attn.rope.freqs_cos', 'visual.blocks.1.attn.rope.freqs_sin', 'visual.blocks.2.attn.rope.freqs_cos', 'visual.blocks.2.attn.rope.freqs_sin', 'visual.blocks.3.attn.rope.freqs_cos', 'visual.blocks.3.attn.rope.freqs_sin', 'visual.blocks.4.attn.rope.freqs_cos', 'visual.blocks.4.attn.rope.freqs_sin', 'visual.blocks.5.attn.rope.freqs_cos', 'visual.blocks.5.attn.rope.freqs_sin', 'visual.blocks.6.attn.rope.freqs_cos', 'visual.blocks.6.attn.rope.freqs_sin', 'visual.blocks.7.attn.rope.freqs_cos', 'visual.blocks.7.attn.rope.freqs_sin', 'visual.blocks.8.attn.rope.freqs_cos', 'visual.blocks.8.attn.rope.freqs_sin', 'visual.blocks.9.attn.rope.freqs_cos', 'visual.blocks.9.attn.rope.freqs_sin', 'visual.blocks.10.attn.rope.freqs_cos', 'visual.blocks.10.attn.rope.freqs_sin', 'visual.blocks.11.attn.rope.freqs_cos', 'visual.blocks.11.attn.rope.freqs_sin', 'visual.blocks.12.attn.rope.freqs_cos', 'visual.blocks.12.attn.rope.freqs_sin', 'visual.blocks.13.attn.rope.freqs_cos', 'visual.blocks.13.attn.rope.freqs_sin', 'visual.blocks.14.attn.rope.freqs_cos', 'visual.blocks.14.attn.rope.freqs_sin', 'visual.blocks.15.attn.rope.freqs_cos', 'visual.blocks.15.attn.rope.freqs_sin', 'visual.blocks.16.attn.rope.freqs_cos', 'visual.blocks.16.attn.rope.freqs_sin', 'visual.blocks.17.attn.rope.freqs_cos', 'visual.blocks.17.attn.rope.freqs_sin', 'visual.blocks.18.attn.rope.freqs_cos', 'visual.blocks.18.attn.rope.freqs_sin', 'visual.blocks.19.attn.rope.freqs_cos', 'visual.blocks.19.attn.rope.freqs_sin', 'visual.blocks.20.attn.rope.freqs_cos', 'visual.blocks.20.attn.rope.freqs_sin', 'visual.blocks.21.attn.rope.freqs_cos', 'visual.blocks.21.attn.rope.freqs_sin', 'visual.blocks.22.attn.rope.freqs_cos', 'visual.blocks.22.attn.rope.freqs_sin', 'visual.blocks.23.attn.rope.freqs_cos', 'visual.blocks.23.attn.rope.freqs_sin']
that's fine
> I placed EVA02_CLIP_L_336_psz14_s6B.pt in ~/.cache/huggingface/hub/models--QuanSun--EVA-CLIP/snapshots/11afd202f2ae80869d6cef18b1ec775e79bd8d12/, but I still get the same error.
> This method does work, but it also needs models--QuanSun--EVA-CLIP/refs/main, a plain-text file containing 11afd202f2ae80869d6cef18b1ec775e79bd8d12.
> Thanks, it's working.
I tried this, but it didn't work. The model keeps reporting errors.
Hello Matteo,
Thank you for your outstanding work; the nodes and tutorials have been incredibly helpful to me. I came across the PuLID video on YouTube, which was truly impressive. However, while using this node I ran into the same EVA02-CLIP-L-14-336 issues described above, and there doesn't seem to be a reliable solution yet (if one already exists, please point me to it directly; my English isn't great, so I may have missed it). Please fix it if possible. If PuLID can be used in commercial workflows, I can apply for sponsorship from my institution to help you focus more on development. Thank you once again for everything you've done.
I also had the problem of the EVA clip loader getting stuck and not progressing. I manually downloaded only the file EVA02_CLIP_L_336_psz14_s6B.pt from https://huggingface.co/QuanSun/EVA-CLIP/tree/main and put it in the folder ComfyUI\models\clip. After restarting ComfyUI several times, it decided to work.
This article helps: https://zhuanlan.zhihu.com/p/697350603#/
Reinstalling PuLID_ComfyUI from the Manager solved the problem for me.
I tried this, but it didn't work. The model keeps reporting errors.
download the model from huggingface
filename : EVA02_CLIP_L_336_psz14_s6B.pt
put it in /comfyui/models/LLM
:)
Loaded EVA02-CLIP-L-14-336 model config. Shape of rope freq: torch.Size([576, 64]) Loading pretrained EVA02-CLIP-L-14-336 weights (eva_clip). incompatible_keys.missing_keys: ['visual.rope.freqs_cos', 'visual.rope.freqs_sin', 'visual.blocks.0.attn.rope.freqs_cos', 'visual.blocks.0.attn.rope.freqs_sin', 'visual.blocks.1.attn.rope.freqs_cos', 'visual.blocks.1.attn.rope.freqs_sin', 'visual.blocks.2.attn.rope.freqs_cos', 'visual.blocks.2.attn.rope.freqs_sin', 'visual.blocks.3.attn.rope.freqs_cos', 'visual.blocks.3.attn.rope.freqs_sin', 'visual.blocks.4.attn.rope.freqs_cos', 'visual.blocks.4.attn.rope.freqs_sin', 'visual.blocks.5.attn.rope.freqs_cos', 'visual.blocks.5.attn.rope.freqs_sin', 'visual.blocks.6.attn.rope.freqs_cos', 'visual.blocks.6.attn.rope.freqs_sin', 'visual.blocks.7.attn.rope.freqs_cos', 'visual.blocks.7.attn.rope.freqs_sin', 'visual.blocks.8.attn.rope.freqs_cos', 'visual.blocks.8.attn.rope.freqs_sin', 'visual.blocks.9.attn.rope.freqs_cos', 'visual.blocks.9.attn.rope.freqs_sin', 'visual.blocks.10.attn.rope.freqs_cos', 'visual.blocks.10.attn.rope.freqs_sin', 'visual.blocks.11.attn.rope.freqs_cos', 'visual.blocks.11.attn.rope.freqs_sin', 'visual.blocks.12.attn.rope.freqs_cos', 'visual.blocks.12.attn.rope.freqs_sin', 'visual.blocks.13.attn.rope.freqs_cos', 'visual.blocks.13.attn.rope.freqs_sin', 'visual.blocks.14.attn.rope.freqs_cos', 'visual.blocks.14.attn.rope.freqs_sin', 'visual.blocks.15.attn.rope.freqs_cos', 'visual.blocks.15.attn.rope.freqs_sin', 'visual.blocks.16.attn.rope.freqs_cos', 'visual.blocks.16.attn.rope.freqs_sin', 'visual.blocks.17.attn.rope.freqs_cos', 'visual.blocks.17.attn.rope.freqs_sin', 'visual.blocks.18.attn.rope.freqs_cos', 'visual.blocks.18.attn.rope.freqs_sin', 'visual.blocks.19.attn.rope.freqs_cos', 'visual.blocks.19.attn.rope.freqs_sin', 'visual.blocks.20.attn.rope.freqs_cos', 'visual.blocks.20.attn.rope.freqs_sin', 'visual.blocks.21.attn.rope.freqs_cos', 'visual.blocks.21.attn.rope.freqs_sin', 'visual.blocks.22.attn.rope.freqs_cos', 'visual.blocks.22.attn.rope.freqs_sin', 'visual.blocks.23.attn.rope.freqs_cos', 'visual.blocks.23.attn.rope.freqs_sin']
Hi, is this message normal?
Let me just add,
1: The file is stored in the directory [ComfyUI.cache\huggingface\hub\models--QuanSun--EVA-CLIP].
2: There are two folders in that directory: snapshots and refs.
3: [EVA02_CLIP_L_336_psz14_s6B.pt] goes inside the directory [models--QuanSun--EVA-CLIP\snapshots\11afd202f2ae80869d6cef18b1ec775e79bd8d12].
4: [refs] contains a file named [main]; remember it has no extension. It is an editable plain-text file whose content is [11afd202f2ae80869d6cef18b1ec775e79bd8d12].
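A minimal Python sketch of the same steps, assuming you have already downloaded EVA02_CLIP_L_336_psz14_s6B.pt into the current directory and that your cache root is the default per-user one (adjust cache_root if your ComfyUI keeps its own .cache folder):

```python
import shutil
from pathlib import Path

# Assumed locations: default per-user Hugging Face cache and a locally downloaded checkpoint.
cache_root = Path.home() / ".cache/huggingface/hub/models--QuanSun--EVA-CLIP"
commit = "11afd202f2ae80869d6cef18b1ec775e79bd8d12"

# 3: the checkpoint goes inside snapshots/<commit>/
snapshot_dir = cache_root / "snapshots" / commit
snapshot_dir.mkdir(parents=True, exist_ok=True)
shutil.copy("EVA02_CLIP_L_336_psz14_s6B.pt", snapshot_dir / "EVA02_CLIP_L_336_psz14_s6B.pt")

# 4: refs/main is a plain text file (no extension) containing the commit hash
refs_dir = cache_root / "refs"
refs_dir.mkdir(parents=True, exist_ok=True)
(refs_dir / "main").write_text(commit)
```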
> This method does work, but it also needs models--QuanSun--EVA-CLIP/refs/main, a plain-text file containing 11afd202f2ae80869d6cef18b1ec775e79bd8d12.
The EVA CLIP model is EVA02-CLIP-L-14-336, but it should be downloaded automatically (it will be located in the huggingface cache directory).
I don't know why it can't be downloaded automatically. There is no manual download address either...
File "D:\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\pretrained.py", line 300, in download_pretrained_from_hf cached_file = hf_hub_download(model_id, filename, revision=revision, cache_dir=cache_dir) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\sunny.conda\envs\comfyui\Lib\site-packages\huggingface_hub\utils_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "C:\Users\sunny.conda\envs\comfyui\Lib\site-packages\huggingface_hub\file_download.py", line 1406, in hf_hub_download raise LocalEntryNotFoundError( huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.