cubiq / ComfyUI_IPAdapter_plus

GNU General Public License v3.0

:lady_beetle: Common issues. Please read! #108

Open cubiq opened 12 months ago

cubiq commented 12 months ago

Before posting a new issue, please check the currently opened and closed issues! Very likely the solution is already there!

The most common causes for issues are:

:arrow_forward: Outdated ComfyUI and/or Extension

Always update ComfyUI and the IPAdapter extension to the latest version. If you are on Windows you may need to re-download a new portable version or use the update scripts!

After the update always stop ComfyUI and restart it. Then refresh the browser a couple of times to clear the cache. If it doesn't work try to recreate the updated nodes.

Many of the errors listed below are caused simply by an outdated installation.

:arrow_forward: IPAdapter, InstantID, PuLID interoperability

The three technologies are very close together and share some common code. Be sure to upgrade all of them before reporting an issue.

:arrow_forward: Delete the old Deprecated IPAdapter extension

You may have already installed the deprecated IPAdapter_ComfyUI extension. That will conflict with this extension and needs to be removed.

:arrow_forward: Can't find the IPAdapterApply node anymore

The IPAdapter Apply node is now replaced by IPAdapter Advanced. It's a drop-in replacement: remove the old node and reconnect the pipelines to the new one.

:arrow_forward: size mismatch for proj_in.weight: copying a param with shape torch.Size([..., ...]) from checkpoint, the shape in current model is torch.Size([..., ...])

Any tensor size mismatch error is caused by the wrong combination of IPAdapter model, image encoder and/or base checkpoint.

All models ending in vit-h require the SD1.5 image encoder. At the moment only one SDXL model and the SD1.5 vit-G model need the bigger image encoder.
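If you are unsure which encoder a given IPAdapter file expects, one rough way to check (a sketch only; the path is a placeholder and key names vary between model variants) is to look at the second dimension of image_proj.proj_in.weight, which is the hidden size of the image encoder the adapter was trained against (1280 for ViT-H, 1664 for ViT-G):

```python
# Rough sketch: peek at an IPAdapter .safetensors file to see which image encoder it expects.
# The path below is a placeholder; key names can differ for FaceID and other variants.
from safetensors import safe_open

path = "ComfyUI/models/ipadapter/ip-adapter-plus_sd15.safetensors"

with safe_open(path, framework="pt") as f:
    for name in f.keys():
        if name.endswith("proj_in.weight") or name.endswith("proj.weight"):
            # e.g. image_proj.proj_in.weight [768, 1280] -> needs the ViT-H (SD1.5) encoder
            print(name, f.get_slice(name).get_shape())
```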

:arrow_forward: Insightface is required for FaceID models

If you use any FaceID model you need InsightFace installed; then use either the simple IPAdapter node or the dedicated IPAdapter FaceID node.
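A quick way to verify that the Python environment ComfyUI actually runs with can import the FaceID dependencies (a minimal sketch; the pip command in the comment is only a suggestion):

```python
# Check that the interpreter running ComfyUI can import the FaceID dependencies.
# If something is missing, install it into the SAME interpreter, e.g. on the Windows
# portable build: python_embeded\python.exe -m pip install insightface onnx onnxruntime
import importlib.util

for module in ("insightface", "onnx", "onnxruntime"):
    spec = importlib.util.find_spec(module)
    print(f"{module}: {'OK' if spec else 'MISSING'}")
```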

:arrow_forward: Can't find the saved embeddings

The embeddings are saved into the output directory and need to be moved into the input directory to be loaded.
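For example (a tiny sketch; the file name and paths are placeholders, adjust to your setup):

```python
# Move a saved embedding from ComfyUI's output folder to its input folder
# so the load node can find it (the file name below is just an example).
import shutil

shutil.move("ComfyUI/output/my_embeds.ipadpt", "ComfyUI/input/my_embeds.ipadpt")
```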

:arrow_forward: Mismatched image encoders / Black image / No result at all / 'NoneType' Error

When you download the encoders from HuggingFace they both have the same file name (model.safetensors). Please be sure to rename them so you can tell them apart (for SD1.5 and SDXL) and use the right one depending on the IPAdapter that you use.

All IPAdapter models use the "SD1.5" image encoder (no matter the target checkpoint) except for one SDXL model and models ending with vit-G.
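One way to keep the two encoders apart (a hedged sketch using huggingface_hub; the renamed target file names are only a suggested convention, any distinct names work):

```python
# Download both image encoders and store them under distinct names in clip_vision.
# Uses huggingface_hub; target paths/names are placeholders, adjust to your install.
import shutil
from huggingface_hub import hf_hub_download

targets = {
    "models/image_encoder/model.safetensors":
        "ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",
    "sdxl_models/image_encoder/model.safetensors":
        "ComfyUI/models/clip_vision/CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors",
}

for remote, local in targets.items():
    cached = hf_hub_download(repo_id="h94/IP-Adapter", filename=remote)
    shutil.copy(cached, local)
    print("saved", local)
```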

:arrow_forward: Dtype mismatch

If you get errors like:

Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.

Run ComfyUI with --force-fp16
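For context, this error comes from PyTorch's fused attention refusing mixed dtypes; a minimal standalone reproduction (nothing IPAdapter-specific) looks roughly like this:

```python
# Minimal reproduction of the dtype mismatch: scaled_dot_product_attention requires
# query/key/value to share a dtype, which is what forcing fp16 (or fp32) restores.
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 16, 64, dtype=torch.float16)   # half-precision query
k = torch.randn(1, 8, 16, 64, dtype=torch.float32)   # float key/value
v = torch.randn(1, 8, 16, 64, dtype=torch.float32)

try:
    F.scaled_dot_product_attention(q, k, v)
except RuntimeError as e:
    print(e)  # "Expected query, key, and value to have the same dtype..."
```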

dustintheweb commented 12 months ago

I'm fully updated and I get 'NoneType' object has no attribute 'encode_image', so that one may not be related to updating... seems to only happen on the sd15 config.

Error occurred when executing IPAdapterApply:

'NoneType' object has no attribute 'encode_image'

File "/Users/dustintheweb/@Projects/Internal/AI/ComfyUI/execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/Internal/AI/ComfyUI/execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/Internal/AI/ComfyUI/execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/Internal/AI/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 408, in apply_ipadapter
clip_embed = clip_vision.encode_image(image)
^^^^^^^^^^^^^^^^^^^^^^^^
cubiq commented 12 months ago

@dustintheweb what commit of ComfyUI are you on? I see you are on Linux (or Mac?); use git pull to update, not the Manager. Then stop Comfy and restart it. Your installation is not up to date, even if the Manager says it is.

dustintheweb commented 11 months ago

Yes, I am on macOS and currently on ComfyUI commit 1754 (which is the latest as of an hour ago), and I still get the same error.


git pull says everything is up to date.


I was on 1751 earlier today when I first posted, which was the latest at that time.

cubiq commented 11 months ago

@dustintheweb can you please post line 41 of your local file comfy/clip_vision.py

dustintheweb commented 11 months ago

def encode_image(self, image):

cubiq commented 11 months ago

have you stopped comfy and restarted?

dustintheweb commented 11 months ago

have you stopped comfy and restarted?

Yep, all the things 🤷

cubiq commented 11 months ago

Yep, all the things 🤷

I don't know if it's a Mac thing; I have no way to check unless you give me SSH access to your machine. The only other thing I can suggest is to start over with a clean installation.

binbinuper commented 11 months ago

[Load CLIP Vision] is wrong. The author has already written to use these ↓ https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors

cubiq commented 11 months ago

ah right that's another possibility

dustintheweb commented 11 months ago

[Load CLIP Vision] is wrong. The author has already written to use these ↓ https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors

Downloaded these from HuggingFace and updated the CLIP Vision file as the only change; now I have this error at Load Checkpoint:

Error occurred when executing CheckpointLoaderSimple:

'NoneType' object has no attribute 'lower'

File "/Users/dustintheweb/@Projects/AI/ComfyUI/execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/AI/ComfyUI/execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/AI/ComfyUI/execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/AI/ComfyUI/nodes.py", line 476, in load_checkpoint
out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/AI/ComfyUI/comfy/sd.py", line 424, in load_checkpoint_guess_config
sd = comfy.utils.load_torch_file(ckpt_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/AI/ComfyUI/comfy/utils.py", line 12, in load_torch_file
if ckpt.lower().endswith(".safetensors"):
^^^^^^^^^^
cubiq commented 11 months ago

I would try to recreate the environment from scratch

dustintheweb commented 11 months ago

I would try to recreate the environment from scratch

yeah it seems like something broke in general from an update this morning. I will nuke all and rebuild from scratch today / tomorrow and follow up. ty

dustintheweb commented 11 months ago

Nuked / rebuilt my environment and got IPAdapter SD1.5 working. The issue was that I was symlinking checkpoints, VAEs and other resources from a common folder instead of using extra_model_paths.yaml. It did not like that for some reason. All is good now, thx again.
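For anyone hitting the same thing, a rough sanity check of extra_model_paths.yaml (assumes PyYAML is available; the file location and section layout may differ in your install):

```python
# Rough sketch: parse extra_model_paths.yaml and report whether each configured
# folder actually exists, instead of relying on symlinks resolving correctly.
import os
import yaml  # pip install pyyaml

with open("ComfyUI/extra_model_paths.yaml") as f:  # placeholder location
    config = yaml.safe_load(f) or {}

for section, entries in config.items():
    base = entries.get("base_path", "")
    for key, value in entries.items():
        if key == "base_path":
            continue
        for sub in str(value).split():  # a key may list several folders
            folder = os.path.join(base, sub)
            status = "ok" if os.path.isdir(folder) else "missing"
            print(f"[{section}] {key}: {folder} -> {status}")
```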

rslosch commented 11 months ago

Nuked / rebuilt my environment and got IPAdapter SD1.5 working. The issue was that I was symlinking checkpoints, VAEs and other resources from a common folder instead of using extra_model_paths.yaml. It did not like that for some reason. All is good now, thx again.


Are you running on CPU? I'm unable to run it otherwise on my Mac m2pro without getting RuntimeError: User specified an unsupported autocast device_type 'mps'

cubiq commented 11 months ago

Are you running on CPU? I'm unable to run it otherwise on my Mac m2pro without getting RuntimeError: User specified an unsupported autocast device_type 'mps'

can you try to force fp32?

cubiq commented 11 months ago

mps should be fixed now

Amirox17 commented 11 months ago

Appreciate the support, @cubiq. Despite updating the code, I encountered a snag related to the dtype specification:

torch/amp/autocast_mode.py", line 329, in __enter__
    torch.set_autocast_cpu_dtype(self.fast_dtype)  # type: ignore[arg-type]
RuntimeError: Currently, AutocastCPU only support Bfloat16 as the autocast_cpu_dtype

To tackle this, I explicitly set the dtype in the init() to torch.bfloat16. It seems to be working since then, though I don't know if it's the correct fix for everyone.
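For reference, the constraint being hit is that (at least in this torch version) CPU autocast only accepts bfloat16; a minimal standalone illustration:

```python
# CPU autocast rejects float16 (the error above) but accepts bfloat16.
import torch

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    a = torch.randn(8, 8)
    b = torch.randn(8, 8)
    out = a @ b          # matmul runs in bfloat16 under CPU autocast

print(out.dtype)         # torch.bfloat16
```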

cubiq commented 11 months ago

To tackle this, I explicitly set the dtype in the init() to torch.bfloat16. It seems to be working since then, though I don't know if it's the correct fix for everyone.

can you try the latest commit? if it doesn't work please open a new issue

JellyBeanMaster commented 11 months ago

Hi there.. for a week now I've been having this problem in ComfyUI on Colab. What can I do? (error screenshot attached)

WarrenGonsalves commented 11 months ago

[Load CLIP Vision] is wrong. The author has already written to use these ↓ https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors

this fixed the 'NoneType' object has no attribute 'encode_image' issue for me thanks!

chlowden commented 11 months ago

Hello .. Anyone have an idea how to resolve this error please? `Error occurred when executing IPAdapterApply:

Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).

File "/home/admin/ComfyUI/execution.py", line 153, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "/home/admin/ComfyUI/execution.py", line 83, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "/home/admin/ComfyUI/execution.py", line 76, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "/home/admin/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 463, in apply_ipadapter self.ipadapter = IPAdapter( File "/home/admin/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 176, in init self.image_proj_model.load_state_dict(ipadapter_model["image_proj"]) File "/home/admin/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( `

chlowden commented 11 months ago

`Error occurred when executing IPAdapterApply:

Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).`

This afternoon's comfyui update (ComfyUI: 1805340177 Manager: V1.11.1)

plus following the advice above to install the two safetensors below into separate folders under /ComfyUI/models/clip_vision (both files have the same name, hence why I created SD & SDXL folders to hold each)

https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors

I can select either model.safetensors in the Load CLIP VISION node ... SDXL version gave the error and the SD version works for my needs.

Thanks to you for all this amazing work. It opens a whole new world to me

UrwLee commented 11 months ago

[Load CLIP Vision] is wrong. The author has already written to use these ↓ https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors

Downloaded these from HuggingFace and updated the CLIP Vision file as the only change; now I have this error at Load Checkpoint:

Error occurred when executing CheckpointLoaderSimple:

'NoneType' object has no attribute 'lower'

File "/Users/dustintheweb/@Projects/AI/ComfyUI/execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/AI/ComfyUI/execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/AI/ComfyUI/execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/AI/ComfyUI/nodes.py", line 476, in load_checkpoint
out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/AI/ComfyUI/comfy/sd.py", line 424, in load_checkpoint_guess_config
sd = comfy.utils.load_torch_file(ckpt_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dustintheweb/@Projects/AI/ComfyUI/comfy/utils.py", line 12, in load_torch_file
if ckpt.lower().endswith(".safetensors"):
^^^^^^^^^^

just download the missing model from the link: https://huggingface.co/webui/ControlNet-modules-safetensors/tree/main

vlsech commented 11 months ago

I am getting this error. I have NVIDIA GeForce GTX 1660 Ti. Start ComfyUI with python.exe -s ComfyUI\main.py --windows-standalone-build --use-split-cross-attention --force-fp32 --lowvram

Error occurred when executing BatchCreativeInterpolation:

Currently, AutocastCPU only support Bfloat16 as the autocast_cpu_dtype

File "C:\Users\vlsec\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\vlsec\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\vlsec\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\vlsec\ComfyUI_windows_portable\ComfyUI\custom_nodes\steerable-motion\SteerableMotion.py", line 343, in combined_function embed, = ipadapter_encoder.preprocess(clip_vision, prepped_image, True, 0.0, 1.0) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\vlsec\ComfyUI_windows_portable\ComfyUI\custom_nodes\steerable-motion\imports\IPAdapterPlus.py", line 718, in preprocess clip_embed_zeroed = zeroed_hidden_states(clip_vision, image.shape[0]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\vlsec\ComfyUI_windows_portable\ComfyUI\custom_nodes\steerable-motion\imports\IPAdapterPlus.py", line 169, in zeroed_hidden_states with precision_scope(comfy.model_management.get_autocast_device(clip_vision.load_device), torch.float32): File "C:\Users\vlsec\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\amp\autocast_mode.py", line 329, in enter torch.set_autocast_cpu_dtype(self.fast_dtype) # type: ignore[arg-type] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


lesaugustins commented 11 months ago

I run Comfy on a PC, total reinstall yesterday from scratch, Comfy and Ipadapter. Loading models Sd 1.5 plus, model for ClipVision Sd1.5 too, , rename it, but always the same error of conversion between float and dfloat, refresh and reload Comfy, recreate nodes again and again , but always this error :

Error occurred when executing KSampler:

Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.

File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list results.append(getattr(obj, func)(slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1299, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1269, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 101, in sample samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 716, in sample return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 622, in sample samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 561, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 737, in sample_ddpm return generic_step_sampler(model, x, sigmas, extra_args, callback, disable, noise_sampler, DDPMSampler_step) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 726, in generic_step_sampler denoised = model(x, sigmas[i] * s_in, *extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 285, in forward out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 275, in forward return self.apply_model(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 272, in apply_model out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 252, in sampling_function cond_pred, uncond_pred = calc_cond_uncondbatch(model, cond, uncond, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 226, in calc_cond_uncond_batch output = model.apply_model(inputx, timestep, c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 85, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, 
transformer_options=transformer_options, extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 854, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 46, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 604, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 431, in forward return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 189, in checkpoint return func(inputs) ^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 528, in _forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 310, in call out_ip = optimized_attention(q, ip_k, ip_v, extra_options["n_heads"]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\comfy\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 318, in attention_pytorch out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


cubiq commented 11 months ago

I am getting this error. I have NVIDIA GeForce GTX 1660 Ti. Start ComfyUI with python.exe -s ComfyUI\main.py --windows-standalone-build --use-split-cross-attention --force-fp32 --lowvram

try with --force-fp16 (also @lesaugustins)

lesaugustins commented 11 months ago

You are a God ! And you replied in 1 minute! Thanks a lot!

JimPresting commented 11 months ago

Same for me. It has to be some sort of compatibility issue between the IPAdapters and the clip_vision models, but I don't know which one is the right model to download based on the models I have. I placed these under clip_vision and the IPAdapter models under /ipadapter, so I don't know why it does not work.

Error: Error occurred when executing IPAdapterApply: 'NoneType' object has no attribute 'encode_image'

Really need help with this one.

crimpproduction commented 11 months ago

I'm getting this error. I've updated my Comfy and ran "update all", but I still get this red error message:


Error occurred when executing IPAdapterApply:

'ClipVisionModel' object has no attribute 'get'

File "C:\AI STABLE DIFFUSION\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\AI STABLE DIFFUSION\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\AI STABLE DIFFUSION\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\AI STABLE DIFFUSION\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 510, in apply_ipadapter clip_embed = clip_vision.get(tensorToCV(image)) # TODO: support multiple images (is it needed?) ^^^^^^^^^^^^^^^

JorgeR81 commented 11 months ago

Is the new FaceID IPAdapter compatible with the ReActor Node install ? https://github.com/Gourieff/comfyui-reactor-node

I already have the ReActor node installed, in a ComfyUI Portable, on Windows, so I already have its InsightFace dependencies in place.

cubiq commented 11 months ago

@JorgeR81 yes, that should work

hinablue commented 11 months ago

I have the same issue as @crimpproduction, but I'm on Ubuntu. I also updated ComfyUI and everything else to the newest version.

JorgeR81 commented 10 months ago

I have FaceID Plus working. I'm very happy with the results. But I get an error when I try to load InsightFace with CUDA.

Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Cui\cu_121_2\ComfyUI_windows_portable\ComfyUI\models\insightface\models\buffalo_l\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
2023-12-30 11:52:43.5667279 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Cui\cu_121_2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

And then it defaults to CPU, right? Because it ends up working, it just takes more time, like the CPU mode.

But this is to be expected, right? Since I'm using a portable ComfyUI version with CUDA 12, and onnxruntime-gpu apparently does not support CUDA 12.

https://stackoverflow.com/questions/75727988/trying-to-use-onnxruntime-with-gpu-sessionoptionsappendexecutionprovider-cuda-g

This is mostly curiosity on my part. It's only a minor issue for me. I just have to wait a bit longer when using the workflow for the first time.
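A rough way to check which execution providers the installed onnxruntime build actually exposes (run it with the same python_embeded interpreter that ComfyUI uses):

```python
# List the execution providers available in the installed onnxruntime build.
# If only CPUExecutionProvider shows up, InsightFace will fall back to the CPU.
import onnxruntime as ort

print(ort.__version__)
print(ort.get_device())                 # "GPU" or "CPU"
print(ort.get_available_providers())    # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']
```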

SupremeGD commented 10 months ago

Hello, how can we solve this: Error occurred when executing ControlNetApply: 'NoneType' object has no attribute 'copy'

File "D:\xunlei\AI\sd-file\ComfyUI\execution.py", line 154, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all)

File "D:\xunlei\AI\sd-file\ComfyUI\execution.py", line 84, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)

File "D:\xunlei\AI\sd-file\ComfyUI\execution.py", line 77, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

File "D:\xunlei\AI\sd-file\ComfyUI\nodes.py", line 706, in apply_controlnet c_net = control_net.copy().set_cond_hint(control_hint, strength)

Johnz86 commented 10 months ago

For those that struggle with the basic workflow setup and still get the error: size mismatch for proj_in.weight: copying a param with shape torch.Size([..., ...])

Be sure to download the correct CLIP Vision model, otherwise it will not work. For SD1.5 models you need models/image_encoder: OpenCLIP-ViT-H-14 with 632.08M parameters. Download only the safetensors files.

The direct download from ComfyUI did not work for me: it creates SD1.5/pytorch_model.bin files, which did not work at all, even though the links from ComfyUI point to the correct related IPAdapter page.
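A rough way to confirm which encoder a clip_vision safetensors file actually is (sketch only, the path is a placeholder): the length of the vision model's class embedding equals the encoder hidden size, 1280 for ViT-H and 1664 for ViT-bigG.

```python
# Rough check of a CLIP Vision .safetensors file: the class_embedding length equals
# the encoder hidden size (1280 -> ViT-H / SD1.5 encoder, 1664 -> ViT-bigG).
from safetensors import safe_open

path = "ComfyUI/models/clip_vision/model.safetensors"  # placeholder

with safe_open(path, framework="pt") as f:
    for name in f.keys():
        if name.endswith("embeddings.class_embedding"):
            print(name, f.get_slice(name).get_shape())
```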

gddjag commented 10 months ago

Finally solved it, it runs now. The problem was that I had put the models in the wrong place. Put the SD1.5 one https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors and the SDXL one https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors into ComfyUI/models/clip_vision. The two files have the same name, so just create two folders, SD15 and SDXL.

vggfy commented 10 months ago

Error occurred when executing InsightFaceLoader. How do I solve this problem?

No module named 'onnx.onnx_cpp2py_export.defs'; 'onnx.onnx_cpp2py_export' is not a package

File "E:\ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 154, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "E:\ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 84, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "E:\ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 77, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "E:\ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 529, in load_insight_face raise Exception(e)

MikeFFWL commented 10 months ago

Hello everyone, maybe someone can help me. Unfortunately I don't know what to do next... thank you very much.

Error occurred when executing InsightFaceLoader:

No module named 'insightface.app'

File "C:\Users\User\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 154, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\User\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 84, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\User\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 77, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\User\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 529, in load_insight_face raise Exception(e)

bihailantian655 commented 10 months ago

Error occurred when executing InsightFaceLoader. How do I solve this problem?

No module named 'onnx.onnx_cpp2py_export.defs'; 'onnx.onnx_cpp2py_export' is not a package

File "E:\ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 154, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "E:\ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 84, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "E:\ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 77, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "E:\ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 529, in load_insight_face raise Exception(e)

Has this been solved? How do I fix it?

748686 commented 10 months ago

Getting an error when running InsightFace with ComfyUI on Windows, could an expert please take a look? ERROR:root:Traceback (most recent call last): File "D:\Program Files (x86)\ComfyUI_windows\ComfyUI\execution.py", line 155, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ... File "D:\Program Files (x86)\ComfyUI_windows\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 435, in call out_ip = optimized_attention(q, ip_k, ip_v, extra_options["n_heads"]) ... RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.

Prompt executed in 228.42 seconds

748686 commented 10 months ago

It errors out at the KSampler every time.

gooojt commented 10 months ago


CXTCodes commented 10 months ago

Help me (error shown in the attached screenshot).

CXTCodes commented 10 months ago

ComfyUI Launcher Diagnostic File

Date: 2024-01-20 04:00:57
Launcher Version: 2.7.11.277
Data File Version: 2024-01-16 17:02
ComfyUI Version: d76a04b6ea61306349861a7c4657567507385947 (2024-01-18 08:37:19)
Working Directory: F:\SD\ComfyUI-aki-v1
------------------------
System Information: 
OS: Microsoft Windows NT 10.0.22621.0
CPU: 12 cores
Memory Size: 32768 MB
Page File Size: 11427 MB

NVIDIA Management Library:
  NVIDIA Driver Version: 546.33
  NVIDIA Management Library Version: 12.546.33

CUDA Driver:
  Version: 12030
  Devices: 
    00000000:01:00.0 0: NVIDIA GeForce GTX 1080 Ti [61] 11 GB

NvApi:
  Version: 54633 r545_00

DirectML Driver: 
  Devices: 
    6918 0: NVIDIA GeForce GTX 1080 Ti 10 GB
    6918 1: NVIDIA GeForce GTX 1080 Ti 10 GB

Intel Level Zero Driver:
  Not Available

------------------------
Environment Variables: 
COMPUTERNAME=财神
SystemRoot=C:\WINDOWS
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
DriverData=C:\Windows\System32\Drivers\DriverData
CUDA_PATH_V12_3=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3
CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3
ProgramFiles(x86)=C:\Program Files (x86)
ProgramData=C:\ProgramData
TMP=C:\Users\ADMINI~1\AppData\Local\Temp
ComSpec=C:\WINDOWS\system32\cmd.exe
SystemDrive=C:
USERDOMAIN_ROAMINGPROFILE=财神
USERDOMAIN=财神
CommonProgramW6432=C:\Program Files\Common Files
windir=C:\WINDOWS
ALLUSERSPROFILE=C:\ProgramData
HOMEDRIVE=C:
USERPROFILE=C:\Users\Administrator
CommonProgramFiles=C:\Program Files\Common Files
OS=Windows_NT
SESSIONNAME=Console
USERNAME=Administrator
PROCESSOR_ARCHITECTURE=AMD64
LOCALAPPDATA=C:\Users\Administrator\AppData\Local
OneDrive=C:\Users\Administrator\OneDrive
ZES_ENABLE_SYSMAN=1
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
NUMBER_OF_PROCESSORS=12
PROCESSOR_REVISION=9e0a
ProgramFiles=C:\Program Files
ProgramW6432=C:\Program Files
HOMEPATH=\Users\Administrator
EFC_5692=1
VS140COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\
PROCESSOR_LEVEL=6
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
APPDATA=C:\Users\Administrator\AppData\Roaming
PUBLIC=C:\Users\Public
TEMP=C:\Users\ADMINI~1\AppData\Local\Temp
Path=C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;C:\Program Files\Bandizip\;C:\Program Files\dotnet\;D:\Git\cmd;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3\include;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3\lib;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3\libnvvp;D:\python\Scripts\;D:\python\;C:\Users\Administrator\AppData\Local\Microsoft\WindowsApps;
PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules
LOGONSERVER=\\财神
------------------------
Paths: 
Python: F:\SD\ComfyUI-aki-v1\.ext\python.exe
Git: F:\SD\ComfyUI-aki-v1\git\cmd\git.exe
Cmd: C:\WINDOWS\system32\cmd.exe
Cache Path: F:\SD\ComfyUI-aki-v1\.cache
------------------------
Config: 
Audience Type: 新手
Engine: CUDA GPU 0: NVIDIA GeForce GTX 1080 Ti (11 GB) [0]
VRAM Optimization: Auto [Auto]
Port: 8188 [8188]
XAttn Optimization: Xformers [Xformers]
Upcast Attention: True [True]
Precision: Auto [Auto]
Text Encoder Precision: Auto [Auto]
UNet Precision: Auto [Auto]
VAE Precision: Auto [Auto]
Preview Method: Auto [Auto]
Smart Memory: True [True]
Deterministic: False [False]
Listen: False [False]
Server Name:  []
HF Offline Mode: False [False]
Cuda Allocator Backend: Native [Native]
Prevent Sysmem Fallback: True
Extra Args: 
------------------------
Network Preferences: 
Proxy Address: 
Proxy Git: False
Proxy Pip: False
Proxy Model Download: False
Proxy Env: False
Mirror Pypi: True
Mirror Git: True
Mirror ExtensionList: True
Mirror Huggingface: True
Github Acceleration: False
------------------------
config.json: 
Could not find file 'F:\SD\ComfyUI-aki-v1\config.json'.
------------------------
ui-config.json: 
Could not find file 'F:\SD\ComfyUI-aki-v1\ui-config.json'.
------------------------
Log: 
** ComfyUI startup time: 2024-01-20 03:58:29.486510
** Platform: Windows
** Python version: 3.11.6 | packaged by conda-forge | (main, Oct  3 2023, 10:29:11) [MSC v.1935 64 bit (AMD64)]
** Python executable: F:\SD\ComfyUI-aki-v1\.ext\python.exe
** Log path: F:\SD\ComfyUI-aki-v1\comfyui.log

#######################################################################
[ComfyUI-Manager] Starting dependency installation/(de)activation for the extension

## ComfyUI-Manager: EXECUTE => ['F:\\SD\\ComfyUI-aki-v1\\.ext\\python.exe', '-m', 'pip', 'install', '-U', '4.1.1']

## Execute install/(de)activation script for '.'
 Looking in indexes: https://mirror.baidu.com/pypi/simple
 Looking in links: https://mirror.sjtu.edu.cn/pytorch-wheels/torch_stable.html, https://mirrors.aliyun.com/pytorch-wheels/torch_stable.html
[!] 
ERROR: Could not find a version that satisfies the requirement 4.1.1 (from versions: none)
[!] 
ERROR: No matching distribution found for 4.1.1
install/(de)activation script failed: .

[ComfyUI-Manager] Startup script completed.
#######################################################################

[ComfyUI-Manager] Windows event loop policy mode enabled

Prestartup times for custom nodes:
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Marigold
   5.1 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Manager

Total VRAM 11264 MB, total RAM 32680 MB
xformers version: 0.0.23

Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1080 Ti : native
VAE dtype: torch.float32
Using xformers cross attention
Adding extra search path checkpoints F:/SD/sd-webui-aki-v4.1/models/Stable-diffusion
Adding extra search path configs F:/SD/sd-webui-aki-v4.1/models/Stable-diffusion
Adding extra search path vae F:/SD/sd-webui-aki-v4.1/models/VAE
Adding extra search path loras F:/SD/sd-webui-aki-v4.1/models/Lora
Adding extra search path loras F:/SD/sd-webui-aki-v4.1/models/LyCORIS
Adding extra search path upscale_models F:/SD/sd-webui-aki-v4.1/models/ESRGAN
Adding extra search path upscale_models F:/SD/sd-webui-aki-v4.1/models/RealESRGAN
Adding extra search path upscale_models F:/SD/sd-webui-aki-v4.1/models/SwinIR
Adding extra search path embeddings F:/SD/sd-webui-aki-v4.1/embeddings
Adding extra search path hypernetworks F:/SD/sd-webui-aki-v4.1/models/hypernetworks
Adding extra search path controlnet F:/SD/sd-webui-aki-v4.1/models/ControlNet
[AnimateDiff] - WARNING - xformers is enabled but it has a bug that can cause issue while using with AnimateDiff.
### Loading: ComfyUI-Impact-Pack (V4.66)
### Loading: ComfyUI-Impact-Pack (Subpack: V0.4)
### Loading: ComfyUI-Inspire-Pack (V0.59)
[Impact Pack] Wildcards loading done.
### Loading: ComfyUI-Manager (V2.2.3)
### ComfyUI Revision: 1917 [d76a04b6] | Released on '2024-01-17'
FETCH DATA from: https://gitcode.net/ranting8323/ComfyUI-Manager/-/raw/main/custom-node-list.json
FETCH DATA from: https://gitcode.net/ranting8323/ComfyUI-Manager/-/raw/main/extension-node-map.json
FETCH DATA from: https://gitcode.net/ranting8323/ComfyUI-Manager/-/raw/main/model-list.json

FETCH DATA from: https://gitcode.net/ranting8323/ComfyUI-Manager/-/raw/main/alter-list.json
Traceback (most recent call last):
  File "F:\SD\ComfyUI-aki-v1\nodes.py", line 1872, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Marigold\__init__.py", line 1, in <module>
    from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Marigold\nodes.py", line 5, in <module>
    from .marigold.model.marigold_pipeline import MarigoldPipeline
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Marigold\marigold\model\marigold_pipeline.py", line 9, in <module>
    from diffusers import (
ModuleNotFoundError: No module named 'diffusers'

Cannot import F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Marigold module for custom nodes: No module named 'diffusers'

(pysssss:WD14Tagger) [ERROR] onnxruntime is required, please check requirements are installed.
Skip F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-WD14-Tagger module for custom nodes due to the lack of NODE_CLASS_MAPPINGS.

### Loading: Workspace Manager (V1.0.0)
------------------------------------------
Comfyroll Studio v1.62 :  160 Nodes Loaded
------------------------------------------
** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
------------------------------------------

[comfyui_controlnet_aux] | INFO -> Using ckpts path: F:\SD\ComfyUI-aki-v1\custom_nodes\comfyui_controlnet_aux\ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
[ComfyUI-Manager] default cache updated: https://gitcode.net/ranting8323/ComfyUI-Manager/-/raw/main/alter-list.json
F:\SD\ComfyUI-aki-v1\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
[ComfyUI-Manager] default cache updated: https://gitcode.net/ranting8323/ComfyUI-Manager/-/raw/main/custom-node-list.json
### [START] ComfyUI AlekPet Nodes ###
Node -> ArgosTranslateNode [Loading]
[ComfyUI-Manager] default cache updated: https://gitcode.net/ranting8323/ComfyUI-Manager/-/raw/main/model-list.json
[ComfyUI-Manager] default cache updated: https://gitcode.net/ranting8323/ComfyUI-Manager/-/raw/main/extension-node-map.json
Node -> ExtrasNode [Loading]
Node -> PainterNode [Loading]
Node -> PoseNode [Loading]
Node -> TranslateNode [Loading]
### [END] ComfyUI AlekPet Nodes ###
Traceback (most recent call last):
  File "F:\SD\ComfyUI-aki-v1\nodes.py", line 1872, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_FizzNodes\__init__.py", line 57, in <module>
    from .ScheduledNodes import (
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 4, in <module>
    import numexpr
ModuleNotFoundError: No module named 'numexpr'

Cannot import F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_FizzNodes module for custom nodes: No module named 'numexpr'
Traceback (most recent call last):
  File "F:\SD\ComfyUI-aki-v1\nodes.py", line 1872, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 936, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1073, in get_code
  File "<frozen importlib._bootstrap_external>", line 1130, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'F:\\SD\\ComfyUI-aki-v1\\custom_nodes\\ComfyUI_roop\\__init__.py'

Cannot import F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_roop module for custom nodes: [Errno 2] No such file or directory: 'F:\\SD\\ComfyUI-aki-v1\\custom_nodes\\ComfyUI_roop\\__init__.py'

Efficiency Nodes: Attempting to add Control Net options to the 'HiRes-Fix Script' Node (comfyui_controlnet_aux add-on)...Success!
Efficiency Nodes Warning: Failed to import python package 'simpleeval'; related nodes disabled.

Patching UNetModel.forward
UNetModel.forward has been successfully patched.
Traceback (most recent call last):
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\insightface\__init__.py", line 8, in <module>
    import onnxruntime
ModuleNotFoundError: No module named 'onnxruntime'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\SD\ComfyUI-aki-v1\nodes.py", line 1872, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\insightface\__init__.py", line 10, in <module>
    raise ImportError(
ImportError: Unable to import dependency onnxruntime. 

Cannot import F:\SD\ComfyUI-aki-v1\custom_nodes\insightface module for custom nodes: Unable to import dependency onnxruntime. 

Traceback (most recent call last):
  File "F:\SD\ComfyUI-aki-v1\nodes.py", line 1872, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 936, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1073, in get_code
  File "<frozen importlib._bootstrap_external>", line 1130, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'F:\\SD\\ComfyUI-aki-v1\\custom_nodes\\insightface-master\\__init__.py'

Cannot import F:\SD\ComfyUI-aki-v1\custom_nodes\insightface-master module for custom nodes: [Errno 2] No such file or directory: 'F:\\SD\\ComfyUI-aki-v1\\custom_nodes\\insightface-master\\__init__.py'
[Power Noise Suite]: 🦚🦚🦚 kweh.. 🦚🦚🦚
[Power Noise Suite]: Tamed 11 wild nodes.
Traceback (most recent call last):
  File "F:\SD\ComfyUI-aki-v1\nodes.py", line 1872, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\was-node-suite-comfyui\__init__.py", line 1, in <module>
    from .WAS_Node_Suite import NODE_CLASS_MAPPINGS
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 37, in <module>
    from numba import jit
ModuleNotFoundError: No module named 'numba'

Cannot import F:\SD\ComfyUI-aki-v1\custom_nodes\was-node-suite-comfyui module for custom nodes: No module named 'numba'

Import times for custom nodes:
   0.0 seconds (IMPORT FAILED): F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_roop
   0.0 seconds (IMPORT FAILED): F:\SD\ComfyUI-aki-v1\custom_nodes\insightface-master
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ControlNet-LLLite-ComfyUI
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\FreeU_Advanced
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\comfyui-workspace-manager
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\AIGODLIKE-ComfyUI-Translation
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_IPAdapter_plus-main
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_TiledKSampler
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_IPAdapter_plus
   0.0 seconds (IMPORT FAILED): F:\SD\ComfyUI-aki-v1\custom_nodes\insightface
   0.0 seconds (IMPORT FAILED): F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-WD14-Tagger
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\PowerNoiseSuite
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_experiments
   0.0 seconds (IMPORT FAILED): F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_FizzNodes
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Custom-Scripts
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\comfyui-animatediff
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Advanced-ControlNet
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\images-grid-comfy-plugin
   0.0 seconds (IMPORT FAILED): F:\SD\ComfyUI-aki-v1\custom_nodes\was-node-suite-comfyui
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_UltimateSDUpscale
   0.0 seconds (IMPORT FAILED): F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Marigold
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\efficiency-nodes-comfyui
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\Derfuu_ComfyUI_ModdedNodes
   0.0 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_Comfyroll_CustomNodes
   0.1 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Inspire-Pack
   0.5 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\comfyui_controlnet_aux
   0.6 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Manager
   1.3 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Impact-Pack
   6.1 seconds: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_Custom_Nodes_AlekPet

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI-Manager\extension-node-map.json

got prompt
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_IPAdapter_plus-main\IPAdapterPlus.py", line 535, in load_insight_face
    from insightface.app import FaceAnalysis
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\insightface\app\__init__.py", line 1, in <module>
    from .face_analysis import *
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\insightface\app\face_analysis.py", line 14, in <module>
    import onnxruntime
ModuleNotFoundError: No module named 'onnxruntime'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\SD\ComfyUI-aki-v1\execution.py", line 155, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\SD\ComfyUI-aki-v1\execution.py", line 85, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\SD\ComfyUI-aki-v1\execution.py", line 78, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_IPAdapter_plus-main\IPAdapterPlus.py", line 537, in load_insight_face
    raise Exception(e)
Exception: No module named 'onnxruntime'

Prompt executed in 0.76 seconds
------------------------
Fault Traceback: Not Available
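Most of the failures in this log come down to Python packages missing from the interpreter ComfyUI runs with: diffusers (Marigold), numexpr (FizzNodes), numba (WAS Node Suite) and onnxruntime (WD14 Tagger, insightface, and the FaceID error at the end). Note also that insightface and insightface-master sit under custom_nodes, but insightface is an ordinary Python package, not a node pack, so it should be pip-installed rather than cloned there; and two copies of this extension (ComfyUI_IPAdapter_plus and ComfyUI_IPAdapter_plus-main) are loaded, so one of them should be removed. A minimal diagnostic sketch, assuming it is run with the same python.exe that launches ComfyUI (portable and aki builds ship their own embedded interpreter):

```python
# Minimal diagnostic sketch (not part of any extension): run it with the SAME
# interpreter that starts ComfyUI to see which of the modules from the log
# above are missing, and print a pip command that targets that interpreter.
import importlib.util
import sys

REQUIRED = ["diffusers", "numexpr", "numba", "onnxruntime", "insightface"]

missing = [name for name in REQUIRED if importlib.util.find_spec(name) is None]
print("interpreter:", sys.executable)
if missing:
    print("missing modules:", ", ".join(missing))
    print("suggested fix :", f'"{sys.executable}" -m pip install ' + " ".join(missing))
else:
    print("all modules found")
```

If anything is reported missing, the printed command installs it into that exact interpreter; restart ComfyUI afterwards.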
CXTCodes commented 10 months ago

Can anyone help me? I'm at my wit's end. I'm using the Qiuye (aki) all-in-one package. Error occurred when executing IPAdapterApply:

'NoneType' object has no attribute 'encode_image'

File "F:\SD\ComfyUI-aki-v1\execution.py", line 155, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\SD\ComfyUI-aki-v1\execution.py", line 85, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\SD\ComfyUI-aki-v1\execution.py", line 78, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\SD\ComfyUI-aki-v1\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 636, in apply_ipadapter
clip_embed = clip_vision.encode_image(image)
^^^^^^^^^^^^^^^^^^^^^^^^

SupremeGD commented 9 months ago

Is your Load Image node not connected?

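For reference, the 'NoneType' object has no attribute 'encode_image' error above means the clip_vision input reaching the IPAdapter node is None: the CLIP Vision loader did not actually load an encoder (missing encoder file or a disconnected Load CLIP Vision node), rather than anything being wrong with the image. A minimal sketch of the failing call, with a hypothetical guard added to show exactly where it breaks:

```python
# Hypothetical guard (not the extension's actual code) illustrating the failure:
# the node calls clip_vision.encode_image(image), and clip_vision is None because
# no CLIP Vision model was loaded.
def encode_reference(clip_vision, image):
    if clip_vision is None:
        raise ValueError(
            "No CLIP Vision model loaded: check the Load CLIP Vision node and "
            "that the encoder .safetensors file exists in models/clip_vision."
        )
    return clip_vision.encode_image(image)
```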

renee1983 commented 9 months ago

Help, please. I'm not a Python expert.

Screenshot 2024-02-23 235408

handles98 commented 9 months ago

Hello, does anyone have an idea how to resolve this error, please? Error occurred when executing IPAdapterApply:

Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).

File "/home/admin/ComfyUI/execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/admin/ComfyUI/execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/admin/ComfyUI/execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/admin/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 463, in apply_ipadapter
self.ipadapter = IPAdapter(
File "/home/admin/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 176, in __init__
self.image_proj_model.load_state_dict(ipadapter_model["image_proj"])
File "/home/admin/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(

Look carefully at the table: only the ViT-bigG models require the larger image encoder.

(screenshot of the table)
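The numbers in the mismatch already identify the cause: the checkpoint's proj_in.weight expects 1280-dimensional image features (the ViT-H, i.e. "SD1.5", encoder), while the loaded encoder produces 1664-dimensional ones (ViT-bigG). A minimal sketch, assuming a "plus"-style .safetensors IPAdapter file (the path below is only an example), that reads the expected size straight from the file:

```python
# Illustrative check (example path): which image-encoder hidden size does this
# IPAdapter checkpoint expect? 1280 -> ViT-H ("SD1.5" encoder), 1664 -> ViT-bigG.
from safetensors.torch import load_file

sd = load_file("models/ipadapter/ip-adapter-plus_sdxl_vit-h.safetensors")  # example path
proj_in = next((v for k, v in sd.items() if k.endswith("proj_in.weight")), None)
if proj_in is not None:
    print("expected image-feature size:", proj_in.shape[-1])
else:
    print("no proj_in layer found: not a 'plus'-style adapter, key layout differs")
```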

ziyuebuke commented 8 months ago

I have the "size mismatch for proj_in.weight" problem. I can't use ip-adapter-faceid-plusv2_sdxl: I've tried all the CLIP Vision models, but it still doesn't work. However, FaceID v2 SD15 works normally. What should I do? Please help.