ltdrdata / ComfyUI-Impact-Pack

Custom nodes pack for ComfyUI. This node pack helps to conveniently enhance images through Detector, Detailer, Upscaler, Pipe, and more.
GNU General Public License v3.0

Error FaceDetailerPipe: Could not run torchvision::nms #405

Closed · brbbbq closed this issue 7 months ago

brbbbq commented 7 months ago

Trying to run a basic face detailer workflow, but getting a torchvision error. Here's the workflow (screenshot: 231228-facedetailer_test-01).

And here's the error:

Error occurred when executing FaceDetailerPipe:

Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

CPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:153 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:290 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:53 [backend fallback]
AutogradCPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:57 [backend fallback]
AutogradCUDA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:65 [backend fallback]
AutogradXLA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:69 [backend fallback]
AutogradMPS: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:77 [backend fallback]
AutogradXPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:61 [backend fallback]
AutogradHPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:90 [backend fallback]
AutogradLazy: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:73 [backend fallback]
AutogradMeta: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:81 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:296 [backend fallback]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:382 [backend fallback]
AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:249 [backend fallback]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:710 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:161 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:165 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:157 [backend fallback]

File "C:\Apps\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 1196, in doit
enhanced_img, cropped_enhanced, cropped_enhanced_alpha, mask, cnet_pil_list = FaceDetailer.enhance_face(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 423, in enhance_face
segs = bbox_detector.detect(image, bbox_threshold, bbox_dilation, bbox_crop_factor, drop_size, detailer_hook=detailer_hook)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\impact_subpack\impact\subcore.py", line 106, in detect
detected_results = inference_bbox(self.bbox_model, core.tensor2pil(image), threshold)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\impact_subpack\impact\subcore.py", line 33, in inference_bbox
pred = model(image, conf=confidence, device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\python_embeded\Lib\site-packages\ultralytics\engine\model.py", line 98, in __call__
return self.predict(source, stream, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\python_embeded\Lib\site-packages\ultralytics\engine\model.py", line 257, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\python_embeded\Lib\site-packages\ultralytics\engine\predictor.py", line 198, in __call__
return list(self.stream_inference(source, model, *args, **kwargs)) # merge list of Result into one
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context
response = gen.send(None)
^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\python_embeded\Lib\site-packages\ultralytics\engine\predictor.py", line 272, in stream_inference
self.results = self.postprocess(preds, im, im0s)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\python_embeded\Lib\site-packages\ultralytics\models\yolo\detect\predict.py", line 25, in postprocess
preds = ops.non_max_suppression(preds,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\python_embeded\Lib\site-packages\ultralytics\utils\ops.py", line 238, in non_max_suppression
i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Apps\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_ops.py", line 692, in __call__
return self._op(*args, **kwargs or {})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I checked my PyTorch version, and it's 0.14.0+cpu. I did a bunch of searching online, but I'm not sure what to try next.
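[Note: a quick way to see which builds the portable install is actually using is to query the embedded interpreter directly. A minimal check, assuming the install path shown in the traceback above:

C:\Apps\ComfyUI_windows_portable\python_embeded\python.exe -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"

A CPU-only build reports a version suffix like +cpu and prints False for torch.cuda.is_available(); a CUDA build reports a suffix such as +cu121.]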

ltdrdata commented 7 months ago


From what I can see, it seems like you are looking at the wrong version.

brbbbq commented 7 months ago

I tried updating PyTorch by running pip install --upgrade torch torchvision torchsde in Git Bash. It seemed to work (screenshot: 231228-202527-python).

But I'm still getting the same error.

I updated ComfyUI only a couple of days ago, and have the latest version of the Impact nodes (screenshot: 231228-202703-chrome).

Is there an easier way of displaying all the torch versions other than import & print?

ltdrdata commented 7 months ago

> I tried updating PyTorch by running pip install --upgrade torch torchvision torchsde in Git Bash. It seemed to work (screenshot: 231228-202527-python).
>
> But I'm still getting the same error.
>
> I updated ComfyUI only a couple of days ago, and have the latest version of the Impact nodes (screenshot: 231228-202703-chrome).
>
> Is there an easier way of displaying all the torch versions other than import & print?

C:\Apps\ComfyUI_windows_portable\python_embeded\python.exe -m pip freeze
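[Note: to show only the torch-related packages, the freeze output can be filtered; a small convenience when run from cmd, assuming the same portable path:

C:\Apps\ComfyUI_windows_portable\python_embeded\python.exe -m pip freeze | findstr /i "torch"

This lists torch, torchvision, torchsde, and anything else whose name contains "torch", along with the exact installed versions.]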

brbbbq commented 7 months ago

Ok, this is what it spit out from the Git Bash:

$ python.exe -m pip freeze
absl-py==1.3.0
accelerate==0.12.0
addict==2.4.0
ai-tools==0.3.9
aiohttp==3.8.3
aiosignal==1.2.0
antlr4-python3-runtime==4.9.3
anyio==3.6.2
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
asttokens==2.1.0
async-timeout==4.0.2
attrs==22.1.0
Babel==2.11.0
backcall==0.2.0
basicsr==1.4.2
bcrypt==4.0.1
beautifulsoup4==4.11.1
bitsandbytes==0.35.4
bleach==5.0.1
braceexpand==0.1.7
cachetools==5.2.0
certifi==2022.9.24
cffi==1.15.1
chardet==4.0.0
charset-normalizer==2.1.1
clean-fid==0.1.34
click==8.1.3
-e git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1#egg=clip
clip-anytorch==2.5.0
cognitive-face==1.5.0
colorama==0.4.6
coloredlogs==15.0.1
contourpy==1.0.6
cryptography==38.0.3
cycler==0.11.0
DateTime==4.7
debugpy==1.6.3
decorator==5.1.1
defusedxml==0.7.1
diffusers @ git+https://github.com/ShivamShrirao/diffusers@3cb0dbe8e407cffbba242ed3412e8359f34d9a33
docker-pycreds==0.4.0
einops==0.5.0
entrypoints==0.4
executing==1.2.0
facexlib==0.2.5
fairscale==0.4.4
fastapi==0.86.0
fastjsonschema==2.16.2
ffmpy==0.3.0
filelock==3.8.0
filterpy==1.4.5
flatbuffers==22.10.26
font-roboto==0.0.1
fonts==0.0.3
fonttools==4.38.0
frozenlist==1.3.1
fsspec==2022.10.0
ftfy==6.1.1
future==0.18.2
gfpgan==1.3.8
gitdb==4.0.9
GitPython==3.1.29
google-auth==2.14.0
google-auth-oauthlib==0.4.6
gradio==3.9
grpcio==1.50.0
h11==0.12.0
httpcore==0.15.0
httpx==0.23.0
huggingface-hub==0.10.1
humanfriendly==10.0
idna==2.10
imageio==2.22.4
importlib-metadata==5.0.0
inflection==0.5.1
invisible-watermark==0.1.5
ipykernel==6.17.0
ipython==8.6.0
ipython-genutils==0.2.0
ipywidgets==7.7.1
jedi==0.18.1
Jinja2==3.1.2
json5==0.9.10
jsonmerge==1.9.0
jsonschema==4.17.0
jupyter-http-over-ws==0.0.8
jupyter-server==1.21.0
jupyter_client==7.4.4
jupyter_core==4.11.2
jupyterlab==3.5.0
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.3
jupyterlab_server==2.16.2
-e git+https://github.com/crowsonkb/k-diffusion/@60e5042ca0da89c14d1dd59d73883280f8fce991#egg=k_diffusion
keras==2.10.0
kiwisolver==1.4.4
kornia==0.5.0
lark==1.1.4
-e git+https://github.com/Doggettx/stable-diffusion@ab0bff6bc08c0ac55e08c596c999e5e5e0a7c111#egg=latent_diffusion
linkify-it-py==1.0.3
llvmlite==0.39.1
lmdb==1.3.0
lpips==0.1.4
Markdown==3.4.1
markdown-it-py==2.1.0
MarkupSafe==2.1.1
matplotlib==3.6.2
matplotlib-inline==0.1.6
mdit-py-plugins==0.3.1
mdurl==0.1.2
mistune==2.0.4
modelcards==0.1.6
mpmath==1.2.1
multidict==6.0.2
natsort==8.2.0
nbclassic==0.4.8
nbclient==0.7.0
nbconvert==7.2.3
nbformat==5.7.0
nest-asyncio==1.5.6
networkx==2.8.8
notebook==6.5.2
notebook_shim==0.2.2
numba==0.56.4
numpy==1.23.4
oauthlib==3.2.2
omegaconf==2.2.3
onnx==1.12.0
onnxruntime==1.13.1
open-clip-torch==2.4.1
opencv-python==4.5.5.64
opencv-python-headless==4.8.1.78
orjson==3.8.1
packaging==21.3
pandas==1.5.1
pandocfilters==1.5.0
paramiko==2.11.0
parso==0.8.3
pathtools==0.1.2
pickleshare==0.7.5
piexif==1.1.3
Pillow==9.0.0
prometheus-client==0.15.0
promise==2.3
prompt-toolkit==3.0.32
protobuf==3.19.6
psutil==5.9.3
pure-eval==0.2.2
py-cpuinfo==9.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.21
pycryptodome==3.15.0
pydantic==1.10.2
pyDeprecate==0.3.2
pydub==0.25.1
Pygments==2.13.0
PyNaCl==1.5.0
pyparsing==3.0.9
pyreadline3==3.4.1
pyrsistent==0.19.2
python-dateutil==2.8.2
python-multipart==0.0.5
pytorch-lightning==1.7.7
pytz==2022.6
PyWavelets==1.4.1
pywin32==304
pywinpty==2.0.9
PyYAML==6.0
pyzmq==24.0.1
realesrgan==0.3.0
regex==2022.10.31
requests==2.25.1
requests-oauthlib==1.3.1
resize-right==0.0.2
rfc3986==1.5.0
rsa==4.9
scikit-image==0.19.3
scipy==1.9.3
seaborn==0.13.0
segment-anything==1.0
Send2Trash==1.8.0
sentry-sdk==1.10.1
setproctitle==1.3.2
shortuuid==1.0.10
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
soupsieve==2.3.2.post1
stack-data==0.6.0
starlette==0.20.4
sympy==1.11.1
-e git+https://github.com/CompVis/taming-transformers.git@3ba01b241669f5ade541ce990f7650a3b8f65318#egg=taming_transformers
tb-nightly==2.11.0a20221109
tensorboard==2.10.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
terminado==0.17.0
thop==0.1.1.post2209072238
tifffile==2022.10.10
timm==0.4.12
tinycss2==1.2.1
tokenizers==0.12.1
tomli==2.0.1
torch==2.1.2
torchdiffeq==0.2.3
torchmetrics==0.10.2
torchsde==0.2.6
torchvision==0.16.2
tornado==6.2
tqdm==4.64.1
traitlets==5.5.0
trampoline==0.1.2
transformers==4.19.2
typing_extensions==4.4.0
uc-micro-py==1.0.1
ultralytics==8.0.231
urllib3==1.26.12
uvicorn==0.19.0
wandb==0.13.5
wcwidth==0.2.5
webdataset==0.2.30
webencodings==0.5.1
websocket-client==1.4.2
websockets==10.4
Werkzeug==2.2.2
wget==3.2
widgetsnbextension==3.6.1
yapf==0.32.0
yarl==1.8.1
zipp==3.10.0
zope.interface==5.5.1
zprint==0.0.11

I noticed these warnings when loading ComfyUI:

[!] WARNING: Skipping cupy-wheel as it is not installed.
[!] WARNING: Skipping cupy-cuda102 as it is not installed.
[!] WARNING: Skipping cupy-cuda110 as it is not installed.
[!] WARNING: Skipping cupy-cuda111 as it is not installed.
 Found existing installation: cupy-cuda11x 12.3.0
 Uninstalling cupy-cuda11x-12.3.0:
   Successfully uninstalled cupy-cuda11x-12.3.0
[!] WARNING: Skipping cupy-cuda12x as it is not installed.
 Collecting cupy-cuda11x
   Using cached cupy_cuda11x-12.3.0-cp311-cp311-win_amd64.whl.metadata (2.7 kB)
 Requirement already satisfied: numpy<1.29,>=1.20 in c:\apps\comfyui_windows_portable\python_embeded\lib\site-packages (from cupy-cuda11x) (1.26.1)
 Requirement already satisfied: fastrlock>=0.5 in c:\apps\comfyui_windows_portable\python_embeded\lib\site-packages (from cupy-cuda11x) (0.8.2)
 Using cached cupy_cuda11x-12.3.0-cp311-cp311-win_amd64.whl (70.0 MB)
 Installing collected packages: cupy-cuda11x
 Successfully installed cupy-cuda11x-12.3.0

I'm still getting the same error. I checked my CUDA version:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

Do I need to update?
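[Note: the PyTorch binary wheels bundle their own CUDA runtime, so the system nvcc version reported above is usually not the deciding factor; what matters is which CUDA version, if any, the installed wheel was built against. A minimal check, again assuming the portable path:

C:\Apps\ComfyUI_windows_portable\python_embeded\python.exe -c "import torch; print(torch.version.cuda)"

This prints a version such as 12.1 for a CUDA-enabled build and None for a CPU-only build.]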

ltdrdata commented 7 months ago

Try uninstalling torch, torchvision, ... and reinstalling. It seems they aren't compatible with your setup.

brbbbq commented 7 months ago

I updated CUDA for the hell of it, but it did not solve the issue (screenshot: 231229-120226-cmd).

I then uninstalled torch, torchvision, and torchsde with these commands in Git Bash:

pip uninstall torch
pip uninstall torchvision
pip uninstall torchsde

And then reinstalled them:

pip install torch
pip install torchvision
pip install torchsde

These are the current versions:

torch==2.1.2
torchsde==0.2.6
torchvision==0.16.2

Still getting the same error :/ (screenshot: 231229-120641-chrome)

ltdrdata commented 7 months ago

No, not that install.

Get the install command from here: https://pytorch.org/
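[Note: on Windows, a plain pip install torch typically pulls the CPU-only wheel from PyPI, which would explain why the reinstall above didn't change anything. For the portable build, the command generated at pytorch.org also needs to target the embedded interpreter rather than whatever pip Git Bash resolves to; a sketch, assuming the CUDA 12.1 wheels used later in this thread:

C:\Apps\ComfyUI_windows_portable\python_embeded\python.exe -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

After this, torch.__version__ should show a +cu121 suffix instead of +cpu.]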

brbbbq commented 7 months ago

From the page https://pytorch.org/get-started/locally/, it gave me this command: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121, and I ran it in Git Bash (screenshot: 231229-160500-mintty-1).

I noticed there was an error about my opencv-python version being incompatible with ultralytics. I tried updating opencv-python with pip3 install --upgrade opencv-python, which seemed successful (screenshot: 231229-161306-mintty).

I then uninstalled and installed ultralytics using the commands:

pip uninstall ultralytics
pip install ultralytics

(screenshot: 231229-162000-mintty)

I'm still getting the exact same error in ComfyUI as when I started; I even double-checked the error with a text comparison.

ltdrdata commented 7 months ago

https://github.com/ultralytics/ultralytics/issues/5059

For some reason, there seems to be a mismatch between the versions of your torch-related packages. Also, I just noticed that you are running the portable version in Git Bash. There have been reported cases of malfunctions in environments like MinGW in the past.

To minimize side effects, it is recommended to run it from cmd, and given the unreliable state of your current installation, it is suggested to perform a clean install of the portable version and test it with only the Impact Pack installed.

brbbbq commented 7 months ago

Ok, so I did a reinstall of ComfyUI Portable, installed just the Impact Pack, and it's working now!

If I understand correctly, you are suggesting I run the pip update commands in Windows' default cmd window instead of the Git Bash window?

I'm going to go ahead and incrementally install my previous install's custom nodes and check for compatibility. I really appreciate you taking the time to address my issues. I'm an artist/3D animator, so there's a lot of fundamental knowledge of Python environments that I'm not aware of, and I'm grateful for your patience.

nlowell11 commented 5 months ago

I ran into this while setting up a new ComfyUI environment (I'm using Anaconda to manage envs). I noticed that torchvision.__version__ was returning a CPU version, even though I had installed the torch dependencies (including torchvision) using the ComfyUI repo's recommended pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121 command. Strangely enough, running pip uninstall torchvision and then pip install torchvision --extra-index-url https://download.pytorch.org/whl/cu121 resolved the issue for me. HTH

offmybach commented 3 months ago

Does this conflict still exist? Seems like plenty of time to get things worked out. I'm getting the error now and have both installed.

ltdrdata commented 3 months ago

> Does this conflict still exist? Seems like plenty of time to get things worked out. I'm getting the error now and have both installed.

This is a user configuration issue. It occurs whenever the user installs packages incorrectly during the initial installation of ComfyUI.