cubiq / ComfyUI_IPAdapter_plus

GNU General Public License v3.0
3.84k stars · 292 forks

:lady_beetle: Common issues. Please read! #108

Open cubiq opened 9 months ago

cubiq commented 9 months ago

Before posting a new issue, please check the currently opened and closed issues! Very likely the solution is already there!

The most common causes for issues are:

:arrow_forward: Outdated ComfyUI and/or Extension

Always update ComfyUI and the IPAdapter extension to the latest version. If you are on Windows you may need to re-download a new portable version or use the update scripts!

After the update, always stop ComfyUI and restart it. Then refresh the browser a couple of times to clear the cache. If that doesn't work, try recreating the updated nodes.

Most of the errors below are caused by an outdated installation.

:arrow_forward: IPAdapter, InstantID, PuLID interoperability

The three technologies are closely related and share some common code. Be sure to upgrade all of them before reporting an issue.

:arrow_forward: Delete the old Deprecated IPAdapter extension

You may have already installed the deprecated IPAdapter_ComfyUI extension. That will conflict with this extension and needs to be removed.

:arrow_forward: Can't find the IPAdapterApply node anymore

The IPAdapter Apply node is now replaced by IPAdapter Advanced. It's a drop-in replacement: remove the old one and reconnect the pipelines to the new one.

:arrow_forward: size mismatch for proj_in.weight: copying a param with shape torch.Size([..., ...]) from checkpoint, the shape in current model is torch.Size([..., ...])

Any tensor size mismatch error is caused by the wrong combination of IPAdapter model, image encoder and/or base checkpoint.

All -vit-h models require the SD1.5 image encoder. At the moment only one SDXL model and the vit-G SD1.5 model need the bigger image encoder.
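As a rough illustration of the pairing rule, here is a hypothetical helper (this function is not part of the extension; the name and return strings are made up for the example):

```python
# Hypothetical helper sketching the model/encoder pairing rule described
# above. Not part of ComfyUI_IPAdapter_plus, just an illustration.
def required_image_encoder(ipadapter_model: str) -> str:
    """Guess which CLIP vision encoder an IPAdapter model expects.

    Rule of thumb from the FAQ: everything uses the SD1.5 (ViT-H)
    encoder, except models ending in "vit-G" and the one SDXL model
    trained on the bigger ViT-bigG encoder.
    """
    name = ipadapter_model.lower().removesuffix(".safetensors")
    if name.endswith("vit-g") or name == "ip-adapter_sdxl":
        return "ViT-bigG (the bigger encoder)"
    return "ViT-H (the SD1.5 encoder)"
```

So for example `ip-adapter_sdxl_vit-h.safetensors` still wants the SD1.5 encoder even though it targets an SDXL checkpoint.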

:arrow_forward: Insightface is required for FaceID models

If you use any FaceID model you need to use either the simple IPAdapter node or the dedicated IPAdapter FaceID node, and InsightFace must be provided.

:arrow_forward: Can't find the saved embeddings

The embeddings are saved into the output directory and need to be moved into the input directory to be loaded.
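A minimal sketch of that move (the directory layout and the `.ipadpt` extension are assumptions about a default install; adjust to your setup):

```python
# Hypothetical sketch: move saved embed files from output/ to input/.
# The ".ipadpt" extension and folder layout are assumptions.
import shutil
from pathlib import Path

def move_saved_embeds(comfy_root: str, ext: str = ".ipadpt") -> list[str]:
    """Move saved embedding files so the loader node can find them."""
    out_dir = Path(comfy_root) / "output"
    in_dir = Path(comfy_root) / "input"
    in_dir.mkdir(parents=True, exist_ok=True)
    moved = []
    for f in sorted(out_dir.glob(f"*{ext}")):
        shutil.move(str(f), str(in_dir / f.name))
        moved.append(f.name)
    return moved
```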

:arrow_forward: Mismatched image encoders / Black image / No result at all / 'NoneType' Error

When you download the encoders from Hugging Face they both have the same name (model.safetensors). Please be sure to rename them correctly (for SD1.5 and SDXL) and use the right one depending on the IPAdapter that you use.

All IPAdapter models use the "SD1.5" image encoder (no matter the target checkpoint) except for one SDXL model and models ending with vit-G.

:arrow_forward: Dtype mismatch

If you get errors like:

Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.

Run ComfyUI with --force-fp16
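On the Windows portable build the flag goes on the launch line of the .bat file you start ComfyUI with; a sketch of what that line might look like (file and folder names follow the typical portable layout, adjust to your install):

```bat
:: run_nvidia_gpu.bat (typical portable layout; paths may differ on your install)
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --force-fp16
pause
```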

Arethkiotis commented 6 months ago

Does anyone have any idea how to fix the error below? Both ComfyUI and IPAdapter are updated, and I have tried multiple restarts as well.

Error occurred when executing IPAdapterApply:

'NoneType' object has no attribute 'patcher'

File "E:\IUmnage\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\IUmnage\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\IUmnage\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\IUmnage\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 751, in apply_ipadapter
    clip_embed = encode_image_masked(clip_vision, image, clip_vision_mask)
File "E:\IUmnage\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 270, in encode_image_masked
    comfy.model_management.load_model_gpu(clip_vision.patcher)

BlankRandom-A commented 6 months ago

Hello. Does anyone have an idea how to resolve this error, please?

fingerx commented 5 months ago

I have the same issue as @crimpproduction, but on Ubuntu. I have also updated ComfyUI to the newest version.

VHS-1 commented 5 months ago

Error occurred when executing IPAdapterApply:

InsightFace must be provided for FaceID models.

File "E:\ComfyUI\Blender_ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\ComfyUI\Blender_ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\ComfyUI\Blender_ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\ComfyUI\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 698, in apply_ipadapter
    raise Exception('InsightFace must be provided for FaceID models.')

MohammadAboulEla commented 5 months ago

Error occurred when executing IPAdapterApply: InsightFace must be provided for FaceID models. (same traceback as above)

same error here

chimelea666 commented 5 months ago

Asking for help: when using the IPAdapter Unified Loader, I downloaded the models and renamed them FaceID, FaceID Plus, FaceID Plus v2, and FaceID Portrait. But which folder should they be put in? I tried several folders, but it did not work. It prompts: Error occurred when executing IPAdapterUnifiedLoaderFaceID:

ClipVision model not found.

File "E:\comfyui-auto\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\comfyui-auto\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\comfyui-auto\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\comfyui-auto\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 373, in load_models
    raise Exception("ClipVision model not found.")

Thanks.

LukeG89 commented 5 months ago


@chimelea666 Check this: https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/313

MichaelKe1985 commented 5 months ago

Hello, asking for help! Where can I get the files to fix this problem? Thanks.

(screenshots attached)

Fearblade66 commented 5 months ago

Hello, asking for help! Where can I get the files to fix this problem?

I also have the same issue. No idea why, and I have the IPAdapter models downloaded.

LukeG89 commented 5 months ago

@MichaelKe1985 @Fearblade66 Are your models located in ComfyUI/models/ipadapter? If not, move all your ipadapter models there. Also @MichaelKe1985, you don't need to connect the IPAdapter Model Loader to the IPAdaper Unified Loader, the Unified will load all the needed models by itself.
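To sanity-check the location, a hypothetical helper (the default `ComfyUI/models/ipadapter` layout is assumed; Stability Matrix or extra_model_paths.yaml installs will differ):

```python
# Hypothetical snippet: list model files in the default ipadapter folder.
# Assumes the standard ComfyUI layout; adjust comfy_root for your install.
from pathlib import Path

def list_ipadapter_models(comfy_root: str) -> list[str]:
    """Return model filenames found in ComfyUI/models/ipadapter,
    or an empty list if the folder does not exist yet."""
    model_dir = Path(comfy_root) / "models" / "ipadapter"
    if not model_dir.is_dir():
        return []
    return sorted(p.name for p in model_dir.iterdir()
                  if p.suffix in (".safetensors", ".bin"))
```

If this returns an empty list, the models are not where the loader expects them.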

spookandrey1 commented 5 months ago

Are your models located in ComfyUI/models/ipadapter? If not, move all your ipadapter models there.

(screenshot attached)

SpanishHearts commented 5 months ago

Same error here. All the models are properly stored in the corresponding folders; the error is "IPAdapter model not found."

Triadasoul commented 5 months ago

Is there a way to install the previous version? Somehow I can't reproduce the results I got with it (the detailing and the impact of the strength parameters are different), and I need to reproduce them. The closest match to the previous ip-adapter/clipvision result comes from ViT-G at medium strength, but it is still slightly different. Maybe there is a set of parameters that makes it work like the previous default SDXL version?

Herambnaik commented 5 months ago

Facing this issue (screenshot attached). Please help.

LukeG89 commented 5 months ago

@Herambnaik

▶️ Insightface is required for FaceID models: If you use any FaceID model you need to use either the simple IPAdapter node or the dedicated IPAdapter FaceID node.

Herambnaik commented 5 months ago

I'm already using InsightFace (screenshot attached) but it's not working.

LukeG89 commented 5 months ago

@Herambnaik You have to replace the node you are using with IPAdapter FaceID

stevehooker commented 5 months ago

Been at this for two hours now. All files are in the correct place; installed and reinstalled the custom node, updated Comfy, stopped, restarted... Tried all versions, SD1.5 and SDXL. I'm using the correct example workflow. But still:

File "F:\Data\Packages\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\Data\Packages\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\Data\Packages\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\Data\Packages\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 393, in load_models
    raise Exception("IPAdapter model not found.")
Exception: IPAdapter model not found.

I'm on Windows 11 and using Stability Matrix, so my path to the models may be the issue? It's the only thing I can think of: F:\Data\Packages\ComfyUI\models\ipadapter

In the meantime, reverting back to the older version.

stevehooker commented 5 months ago

In the meantime, reverting back to the older version.

I have reverted, and I can see the old loader and the Apply IPAdapter node. All looks good, but I cannot load any of the IPAdapter models. Of course, I've copied the whole lot back into custom_nodes\ComfyUI_IPAdapter_plus\models, just like they used to be. But no, I cannot use the loader: the drop-down list doesn't drop and I cannot select anything!

Both versions are now screwed for me. I hope this is some kind of clue for you.

stevehooker commented 5 months ago

Solved! Stability Matrix was indeed the problem, with both the old and new versions. I don't know why with the old, but hey-ho!

I just needed to move my models to Data\Models\IpAdapter, as instructed by extra_model_paths.yaml.

All good now : -)
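For anyone else on Stability Matrix: the mapping lives in extra_model_paths.yaml. A hypothetical stanza might look like this (the paths are illustrative for my layout; the key-to-folder mapping follows ComfyUI's bundled example file):

```yaml
comfyui:
    base_path: F:/Data/
    ipadapter: Models/IpAdapter/
```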

52Hz-p commented 5 months ago

Error occurred when executing IPAdapterUnifiedLoaderFaceID:

LoRA model not found. Hello, when I use the IPAdapter Unified Loader FaceID node I get an error saying the LoRA model cannot be found, but my LoRA models are all there.

lasersees commented 5 months ago

I am having the error

"▶️ Dtype mismatch with older GPUs If you get errors like:

Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.

Run ComfyUI with --force-fp16"

But where do I put "--force-fp16" in "main.py"? I downloaded ComfyUI through Pinokio and can't find any file with that name where I can put the parameter. I found "nvidia_gpu" and put the parameter there, but it still shows the same error while processing in the KSampler. If anyone has the solution, please help.

xc412 commented 5 months ago

Error occurred when executing IPAdapterInsightFaceLoader: Unable to import dependency onnxruntime.

DarkAlchy commented 5 months ago

I quickly scanned this and did not see my issue. If I knew where the problem was I would just add a very simple typecast, hope that is the only issue, and if not go from there. By quickly scanning I mean I just searched for bfloat, and the one mentioned here isn't it if you read the error message. My message is saying it wants half while I am giving it bfloat, if I am reading the error message correctly.

TripleHeadedMonkey commented 5 months ago

I quickly scanned this and did not see this. If I knew where this issue was at I would just add a very simple typecast to hope that is the only issue, and if not go from there.

"Typecast" you say....?

You mean to say that the words "Just change to FP16 manually" aren't a real solution? And that the actual solution is as simple as having it automatically set itself to FP16, bypassing the overall settings and maintaining the users preferences?

Man... It sounds like such a simple fix that the only reason I wouldn't implement it myself is because I have coding dyslexia.

Would be nice if your previous issue wasn't closed without providing any useful information to help you achieve this fix yourself... Good luck trying to fix these issues with no real support beyond "don't bother" :D

I don't know many people who use BF16 UNET, but that hasn't stopped me from making sure that my models are all compatible with both BF16 and fp16 as a matter of course. The ONLY reason I can see not to do so, would be because GTX 16xx GPU users would be excluded from using it if it forced FP16 and would not allow for float32.

After addressing the elephant in the room, Great work though on the overall development, Cubiq.

cubiq commented 5 months ago

type cast issues can be generally solved with --force-fp16 which is anyway a good idea in general. I tried to cover the most common scenarios but I can't predict them all. Also people rarely give enough info to understand where the problem actually lies, so force-fp16 is generally the easiest solution.

DarkAlchy commented 5 months ago

I gave you precisely what I was given; I can't give you more. Hit generate, screwed. Forcing fp16 is not a good idea, as forcing anything is a bad idea when bf16 exists (same bit count, and neither fp16 nor bf16 is allowed on CPU). If I knew where you were casting to fp16 before it heads off to the KSampler I would just slap in a typecast and be done with it, but I don't wish to hunt down where it might be happening when you could just as easily have a dropdown box to allow fp16 or bf16 (if Comfy doesn't let the nodes know what someone is using).

cubiq commented 5 months ago

The node already checks which fp you are using and updates accordingly. There are many places where things can go wrong in the cross attention. If you are able to pinpoint where it happens, let me know; for the moment you can use fp16, which is the default anyway.

DarkAlchy commented 5 months ago

That is the point: I don't know precisely where it happens in your code, since it happens right at the moment it hits the KSampler. If I remove your node and go directly to the model instead, it works. The KSampler is expecting bf16 and you are passing it .half() (fp16), so the first place I would check is whether Comfy is telling you the truth that I am in bf16, because your code is sending fp16 even though I am in bf16, which is what the KSampler expects.

TripleHeadedMonkey commented 5 months ago

FYI, just as disabling half precision and upcasting to FP32 is recommended as a "solution" for A1111 and KohyaSS users, it is actually what is called a "workaround", and for some people it's one that simply isn't viable.

Especially not for power users.

SDXL's best performance comes from BF16, not FP16. Having to switch to another precision to run the nodes, while only an inconvenience for most casual users can be extremely debilitating to power users.

DarkAlchy commented 5 months ago

Here is the issue, at the place I previously said I would check:

def ipadapter_execute(model,
                      ipadapter,
                      clipvision,
                      insightface=None,
                      image=None,
                      image_composition=None,
                      image_negative=None,
                      weight=1.0,
                      weight_faceidv2=None,
                      weight_type="linear",
                      combine_embeds="concat",
                      start_at=0.0,
                      end_at=1.0,
                      attn_mask=None,
                      pos_embed=None,
                      neg_embed=None,
                      unfold_batch=False,
                      embeds_scaling='V only'):
    dtype = torch.float16 if model_management.should_use_fp16() else torch.bfloat16 if model_management.should_use_bf16() else torch.float32
    print("HERE:",dtype)
    device = model_management.get_torch_device()

(screenshots of the console output)

cubiq commented 5 months ago

above that line there's a model_management.should_use_bf16() condition that passes false for some reason.

DarkAlchy commented 5 months ago

above that line there's a model_management.should_use_bf16() condition that passes false for some reason.

Correct. I can hard set this, but my suspicion is Comfy has a bug and is not telling the truth.

cubiq commented 5 months ago

at the very least I need:

DarkAlchy commented 5 months ago

@echo off

:: Deactivate the virtual environment
call .\venv\Scripts\deactivate.bat

:: Calling external python program to check for local modules
:: python .\setup\check_local_modules.py --no_question

:: Activate the virtual environment
call .\venv\Scripts\activate.bat
::set PATH=%PATH%;.\venv\Lib\site-packages\torch\lib

python -s main.py --windows-standalone-build --use-pytorch-cross-attention --dont-upcast-attention --disable-xformers --disable-smart-memory --bf16-unet --bf16-vae --normalvram
pause


ip_workflow.json

cubiq commented 5 months ago

I need the comfyui startup message as well

DarkAlchy commented 5 months ago

This it?

ComfyUI startup time: 2024-04-06 10:54:44.380131
Platform: Windows
Python version: 3.10.12 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 19:01:18) [MSC v.1916 64 bit (AMD64)]
Python executable: F:\ComfyUI\venv\Scripts\python.exe
** Log path: F:\ComfyUI\comfyui.log
[Maras IT] Initialization

Prestartup times for custom nodes:
0.0 seconds: F:\ComfyUI\custom_nodes\rgthree-comfy
0.0 seconds: F:\ComfyUI\custom_nodes\ComfyUI-MarasIT-Nodes
0.0 seconds: F:\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 24564 MB, total RAM 49078 MB
Set vram state to: NORMAL_VRAM
Disabling smart memory management
Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
VAE dtype: torch.bfloat16
disabling upcasting of attention
Using pytorch cross attention

cubiq commented 5 months ago

I'm tentatively using the unet dtype, I believe I was doing that already in the past and caused some issues. Let's see how it goes.

FOUR260 commented 5 months ago

How do I solve this problem? Error occurred when executing KSampler: Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half, key.dtype: float and value.dtype: float instead. (screenshots attached)

DarkAlchy commented 5 months ago

Seems it is my issue, but with float instead of bfloat. It has to be the same line I pointed out as the cause.

cubiq commented 5 months ago

there are some functions that are not compatible with all dtypes by pytorch. All the dtypes that are compatible should be supported now, otherwise it falls back to float16 or 32. If you still get error you have to --force-fp16 which is the default anyway

DarkAlchy commented 5 months ago

there are some functions that are not compatible with all dtypes by pytorch. All the dtypes that are compatible should be supported now, otherwise it falls back to float16 or 32. If you still get error you have to --force-fp16 which is the default anyway

Were you not able to use .to(xxx) for those?

cubiq commented 5 months ago

in which case it would not still honor the requested dtype though

DarkAlchy commented 5 months ago

It also would not error out either. I know when I programmed I used bfloat16, and for the libraries that did not support bfloat I had to .to(xxx) and then, when they finished, .to(xxx) back to whatever. For instance numpy refuses, so I had to not only stick it on the CPU (I can't wait for numpy to finally gain GPU functionality) but .float() it as well. Then when it was done I did .to(torch.bfloat16) for the rest of my code.

CaliforniaOrange2 commented 5 months ago

What's the solution to this problem? (screenshot attached)

yaboosat commented 5 months ago

Dear Matteo, I opened this topic (https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/427) as I had an issue running IPAdapter, and you said it has been answered here, but I couldn't find it. If you mean I have to update or reinstall the plugin, I have done both but it didn't solve it.

Would you please help me what's wrong?

poojan-solanki commented 5 months ago

I installed ComfyUI today (15/04/2024) and I think the issue still persists. Can anyone please help me in an easy way? I am quite new. As per this tutorial https://comfyanonymous.github.io/ComfyUI_examples/unclip/ when I drag the image in to try img2img it gives an error:

!!! Exception during processing !!!
Traceback (most recent call last):
  File "P:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "P:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "P:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "P:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 880, in encode
    output = clip_vision.encode_image(image)
             ^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'encode_image'

Prompt executed in 6.72 seconds


Is there a new way to perform img2img?

I haven't performed the IPAdapter steps above because I was unable to understand them.

comtest99 commented 5 months ago

My IPAdapter Advanced node's weight_type can't show style transfer (SDXL) and composition (SDXL). (screenshot attached)

yxf551 commented 5 months ago


weight_type can't show style transfer (SDXL) and composition (SDXL)

seghier commented 5 months ago

I really like and appreciate your excellent work. It finally works: I downloaded the new portable version and copied over the ComfyUI folder.