cubiq / ComfyUI_IPAdapter_plus

GNU General Public License v3.0
3.2k stars · 240 forks

Error occurred when executing IPAdapter: insightface model is required for FaceID models #401

Closed. a-niess closed this issue 2 months ago.

a-niess commented 3 months ago

I have installed all the models, FaceID, LoRAs, etc. in the proper directories per the Installation instructions. I am only working with SDXL and have omitted the SD1.5 installations.

I have also installed insightface via pip install insightface and onnxruntime via pip install onnxruntime and pip install onnxruntime-gpu within the /workspace/ComfyUI directory. I am running ComfyUI via RunPod with the ComfyUI - Jupyter template.

(screenshot attached: Screenshot 2024-04-05 at 6 52 48 PM)

I appreciate any help others may be able to offer on this error:

Error occurred when executing IPAdapter:

insightface model is required for FaceID models

File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/workspace/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 569, in apply_ipadapter
return (ipadapter_execute(model.clone(), ipadapter['ipadapter']['model'], ipadapter['clipvision']['model'], **ipa_args), )
File "/workspace/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 181, in ipadapter_execute
raise Exception("insightface model is required for FaceID models")
a-niess commented 3 months ago

The error is raised in ComfyUI_IPAdapter_plus/IPAdapterPlus.py at lines 180–181:

if is_faceid and not insightface:
    raise Exception("insightface model is required for FaceID models")
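A common cause of hitting that branch even after `pip install insightface` is installing into a different Python than the one ComfyUI runs under. A minimal sketch of a sanity check (assuming `python3` is the interpreter your ComfyUI start script uses; substitute yours):

```shell
# Verify the packages are importable by the same interpreter that runs
# ComfyUI -- an environment mismatch makes them "installed" but invisible.
python3 -c "import insightface; print('insightface OK')" 2>/dev/null \
  || echo "insightface is NOT importable from this interpreter"
python3 -c "import onnxruntime; print('onnxruntime OK')" 2>/dev/null \
  || echo "onnxruntime is NOT importable from this interpreter"
```

If either line reports NOT importable, install the package with that interpreter's pip (e.g. `python3 -m pip install insightface`).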
TrickleDownClown commented 3 months ago

I continue to have this same issue. Even after all the suggested fixes (#162) I am unsure how to proceed.

cubiq commented 3 months ago

FaceID models require the FaceID IPadapter node. You are probably just using the wrong node

deniansouza commented 3 months ago

The exact same thing happened to me. Turns out I just needed to restart ComfyUI (I had installed everything while it was running).

TrickleDownClown commented 3 months ago

> FaceID models require the FaceID IPadapter node. You are probably just using the wrong node

I double-checked and I am using that node. It still reports the InsightFace install as missing even though it is installed; the path appears to be correct, and I used the onnxruntime wheel as per #162.

cubiq commented 3 months ago

Do you get any error at startup?

Anti-Liang commented 3 months ago

In my case, downloading the InsightFace model solved this problem.

Noctropolitan commented 2 months ago

I'm having exactly the same error. Any ideas?

cubiq commented 2 months ago

I'm closing this as it looks like a configuration issue and not a bug

a-niess commented 2 months ago

The errors I get in my RunPod environment are resolved by using the ComfyUI Manager's "Install PIP packages" button and completing the following installations:

pip install pillow==10.2.0 insightface onnxruntime onnxruntime-gpu

Afterward, I do not get any issues.
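As a follow-up, a quick sketch for confirming the pinned Pillow version and the runtimes actually landed (again assuming `python3` is the interpreter ComfyUI uses; PIL should report 10.2.0 if the pin above took effect):

```shell
# Print the installed version of each package, or flag it as missing.
for pkg in PIL insightface onnxruntime; do
  python3 -c "import $pkg; print('$pkg', $pkg.__version__)" 2>/dev/null \
    || echo "$pkg not importable in this interpreter"
done
```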

philcoraz commented 2 months ago

@a-niess it worked, thanks a lot!

a-niess commented 2 months ago

@philcoraz I'm glad it worked for you! There's a way to configure your RunPod so that these installations will persist in your python virtual environment, but doing the installs through the ComfyUI Manager each time is pretty straightforward.

jgal14 commented 2 months ago

@a-niess could you confirm which RunPod template you are using please? The install pip packages approach is giving me this error;

Failed to install 'pip install pillow==10.2.0 insightface onnxruntime onnxruntime-gpu' See terminal log.

a-niess commented 2 months ago

@jgal14 I am using the ghcr.io/ai-dock/comfyui:latest-jupyter template. The template used to be offered by default on RunPod but was removed sometime last week. If this is the template you are using, the closest equivalent is ghcr.io/ai-dock/comfyui:latest, which RunPod calls ComfyUI - AI-Dock, so you can just edit the template by appending -jupyter to the Container Image name and then confirm with "Set Overrides." I am also using a network volume.

jgal14 commented 2 months ago

> @jgal14 I am using the ghcr.io/ai-dock/comfyui:latest-jupyter template. […]

Awesome thanks! I'll check this out.

jgal14 commented 2 months ago

@a-niess I set that template up and opened the port and it's asking for a login. Where do I create these credentials?

jgal14 commented 2 months ago

scratch that, found it!

a-niess commented 2 months ago

@jgal14 The default with this container image is

user password

and can be configured to something else.

jgal14 commented 2 months ago

> @jgal14 The default with this container image is
>
> user password
>
> and can be configured to something else.

Thank you again

jgal14 commented 2 months ago

@a-niess did you use a custom image to get the IP-adapter running on this template? Models I'm adding to the standard file structure aren't showing in workflows!

a-niess commented 2 months ago

@jgal14 When you connect to the Jupyter notebook via Connect to HTTP Service [Port 8888]:

  1. launch a new terminal
  2. cd into the appropriate directory for where you want to add models. E.g., if you're adding a LoRA then cd ComfyUI/models/loras
  3. copy the download URL of the model from its source. E.g., on CivitAI or HF, right-click the download button/link for the model, then choose "Copy link address"
  4. back in the Jupyter terminal, once you are in the appropriate directory from step 2, run wget paste-model-URL. E.g., wget https://civitai.com/api/download/models/135867 will download the file add-detail-xl.safetensors from the Detail Tweaker XL page on CivitAI.
  5. In some cases, the model will only download as 135867 without a .ckpt or .safetensors extension. In that case, rename the file with the extension so that it will appear in your ComfyUI nodes. To do so in the same terminal and directory, run mv current-name target-name. E.g., mv 135867 add-detail-xl.safetensors

Note: You can always use ls to see the contents of the current directory (not sure how familiar you may be with the command line).

Models installed as above will only persist if you are using a network volume in RunPod. If you are not using a network volume (found under RunPod's Storage tab on the left), then these models will not be there next time you spin up your pod.
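The steps above can be sketched as a single terminal session (the loras path and the CivitAI URL are the examples from this thread; adjust both for your own model):

```shell
# Run from a terminal in the pod's Jupyter interface (port 8888).
cd /workspace/ComfyUI/models/loras                    # step 2: target model directory
wget https://civitai.com/api/download/models/135867   # step 4: fetch by URL
# step 5: CivitAI may save the file without an extension; rename it so
# ComfyUI's nodes can list it
[ -f 135867 ] && mv 135867 add-detail-xl.safetensors
ls                                                    # confirm the file is in place
```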

jgal14 commented 2 months ago

> @jgal14 When you connect to the Jupyter notebook via Connect to HTTP Service [Port 8888]: […]

Awesome thank you for the detailed response, this is up and running now.

Now, onto a serverless deployment!