ltdrdata / ComfyUI-Impact-Subpack

This extension serves as a complement to the Impact Pack, offering features that are not deemed suitable for inclusion by default in the ComfyUI Impact Pack.
GNU Affero General Public License v3.0

Problem With CUDA #1

Closed amortegui84 closed 1 year ago

amortegui84 commented 1 year ago

Hello, I tried using CUDA versions 11 and 12. I also installed the components, updated the paths, and uninstalled and reinstalled everything, but even though everything else works, this appears when I try to use it.

Error occurred when executing BboxDetectorSEGS:

Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

CPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:30 [backend fallback]
AutogradCPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:34 [backend fallback]
AutogradCUDA: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:42 [backend fallback]
AutogradXLA: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:46 [backend fallback]
AutogradMPS: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:54 [backend fallback]
AutogradXPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:38 [backend fallback]
AutogradHPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:67 [backend fallback]
AutogradLazy: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:50 [backend fallback]
AutogradMeta: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:58 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:148 [backend fallback]

File "C:\ai\ComfyUI\ComFyUI\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "C:\ai\ComfyUI\ComFyUI\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "C:\ai\ComfyUI\ComFyUI\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(slice_dict(input_data_all, i))) File "C:\ai\ComfyUI\ComFyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\detectors.py", line 83, in doit segs = bbox_detector.detect(image, threshold, dilation, crop_factor, drop_size) File "C:\ai\ComfyUI\ComFyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\subpack\impact\subcore.py", line 93, in detect detected_results = inference_bbox(self.bbox_model, core.tensor2pil(image), threshold) File "C:\ai\ComfyUI\ComFyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\subpack\impact\subcore.py", line 27, in inference_bbox pred = model(image, conf=confidence, device=device) File "C:\Python310\lib\site-packages\ultralytics\engine\model.py", line 98, in call return self.predict(source, stream, kwargs) File "C:\Python310\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(*args, kwargs) File "C:\Python310\lib\site-packages\ultralytics\engine\model.py", line 246, in predict return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream) File "C:\Python310\lib\site-packages\ultralytics\engine\predictor.py", line 197, in call return list(self.stream_inference(source, model, *args, *kwargs)) # merge list of Result into one File "C:\Python310\lib\site-packages\torch\utils_contextlib.py", line 35, in generator_context response = gen.send(None) File "C:\Python310\lib\site-packages\ultralytics\engine\predictor.py", line 257, in stream_inference self.results = self.postprocess(preds, im, im0s) File "C:\Python310\lib\site-packages\ultralytics\models\yolo\segment\predict.py", line 18, in postprocess p = ops.non_max_suppression(preds[0], File "C:\Python310\lib\site-packages\ultralytics\utils\ops.py", line 265, in non_max_suppression i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS File "C:\Python310\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms return torch.ops.torchvision.nms(boxes, scores, iou_threshold) File "C:\Python310\lib\site-packages\torch_ops.py", line 502, in call return self._op(args, kwargs or {})

ltdrdata commented 1 year ago

Have you tried updating the torch version?

amortegui84 commented 1 year ago

Hey, thanks for the answer. Yes, the problem is that when I update torch, it is no longer compatible with CUDA, and if I update CUDA, other incompatibility errors appear, lol. Do you know which combination you use that doesn't cause problems? I think with that I could try to install everything the way you have it.

ltdrdata commented 1 year ago

Hey, thanks for the answer. Yes, the problem is that when I update torch, it is no longer compatible with CUDA, and if I update CUDA, other incompatibility errors appear, lol. Do you know which combination you use that doesn't cause problems? I think with that I could try to install everything the way you have it.

In my case, I'm using the nightly version, and there are no issues.

pytorch-triton               2.1.0+e6216047b8
torch                        2.1.0.dev20230812+cu118
torchaudio                   2.1.0.dev20230812+cu118
torchsde                     0.2.5
torchvision                  0.16.0.dev20230812+cu118
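A quick way to sanity-check a combination like the one above is to compare the CUDA tags of the two packages. This is an illustrative sketch, assuming the installed wheels carry a `+cuXXX` suffix in their version strings as the nightly builds above do:

```python
# Rough sanity check (illustrative, not from the thread): the CUDA tag embedded
# in the torch and torchvision version strings (e.g. "+cu118") should match; a
# mismatch is the usual cause of the "torchvision::nms ... CUDA backend" error.
import torch
import torchvision

def cuda_tag(version: str) -> str:
    # "2.1.0.dev20230812+cu118" -> "cu118"; returns "" for builds without a tag
    return version.split("+", 1)[1] if "+" in version else ""

print("torch      :", torch.__version__)
print("torchvision:", torchvision.__version__)
assert cuda_tag(torch.__version__) == cuda_tag(torchvision.__version__), \
    "torch and torchvision were built for different CUDA versions"
```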
amortegui84 commented 1 year ago

OK, maybe I need to try the nightly version; it's the only thing I haven't used. Thanks a lot.

ltdrdata commented 1 year ago

OK, maybe I need to try the nightly version; it's the only thing I haven't used. Thanks a lot.

If you're going to use the nightly version, you'll need to build xformers yourself.
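As a hedged aside, not from the thread: once xformers has been built against the nightly torch, importing `xformers.ops` forces the compiled extension to load, which is a quick way to catch a build or ABI mismatch; `python -m xformers.info` prints a fuller build summary.

```python
# Quick post-build check for xformers (illustrative, not from the thread).
import torch

try:
    import xformers
    import xformers.ops  # loading the ops module pulls in the compiled extension
    print("xformers:", xformers.__version__, "| torch:", torch.__version__)
except ImportError as exc:
    # An xformers build made against a different torch usually fails here with
    # an import or undefined-symbol error rather than at ComfyUI startup.
    print("xformers is not usable with this torch build:", exc)
```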