Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
CPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
Meta: registered at /dev/null:440 [kernel]
QuantizedCPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:53 [backend fallback]
AutogradCPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:57 [backend fallback]
AutogradCUDA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:65 [backend fallback]
AutogradXLA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:69 [backend fallback]
AutogradMPS: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:77 [backend fallback]
AutogradXPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:61 [backend fallback]
AutogradHPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:90 [backend fallback]
AutogradLazy: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:73 [backend fallback]
AutogradMeta: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:81 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:297 [backend fallback]
AutocastCPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autocast\nms_kernel.cpp:34 [kernel]
AutocastCUDA: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autocast\nms_kernel.cpp:27 [kernel]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:720 [backend fallback]
BatchedNestedTensor: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:746 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:158 [backend fallback]
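If I'm reading the dispatcher message right, torchvision's NMS op is being called on CUDA tensors, but my torchvision build only registered CPU kernels for it. This is a minimal sketch of what that call boils down to (my own assumption for illustration, not code from the workflow):

```python
# Minimal sketch, assuming the cause is a CPU-only torchvision install
# sitting next to a CUDA-enabled torch. Not part of the CatVTON workflow.
import torch
from torchvision.ops import nms

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")

# A CUDA-enabled torchvision returns the indices of the kept boxes here;
# a CPU-only build raises the same "Could not run 'torchvision::nms'" error.
print(nms(boxes, scores, iou_threshold=0.5))
```

The full traceback from the workflow run is below: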
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\custom_nodes\Comfyui-CatVTON__init.py", line 160, in generate
mask = pipe(
^^^^^
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\custom_nodes\Comfyui-CatVTON\model\cloth_masker.py", line 257, in call
preprocess_results = self.preprocess_image(image)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\custom_nodes\Comfyui-CatVTON\model\cloth_masker.py", line 182, in preprocess_image
'densepose': self.densepose_processor(image_or_path, resize=1024),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\custom_nodes\Comfyui-CatVTON\model\DensePose__init.py", line 140, in call__
outputs = self.predictor(img)["instances"]
^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\detectron2\engine\defaults.py", line 317, in call__
predictions = self.model([inputs])[0]
^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\detectron2\modeling\meta_arch\rcnn.py", line 146, in forward
return self.inference(batched_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\detectron2\modeling\metaarch\rcnn.py", line 204, in inference
proposals, _ = self.proposal_generator(images, features, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\detectron2\modeling\proposal_generator\rpn.py", line 477, in forward
proposals = self.predict_proposals(
^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\detectron2\modeling\proposal_generator\rpn.py", line 503, in predict_proposals
return find_top_rpn_proposals(
^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\detectron2\modeling\proposal_generator\proposal_utils.py", line 116, in find_top_rpn_proposals
keep = batched_nms(boxes.tensor, scores_per_img, lvl, nms_thresh)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\detectron2\layers\nms.py", line 21, in batched_nms
return box_ops.batched_nms(boxes.float(), scores, idxs, iou_threshold)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torchvision\ops\boxes.py", line 75, in batched_nms
return _batched_nms_coordinate_trick(boxes, scores, idxs, iou_threshold)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\jit_trace.py", line 1240, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torchvision\ops\boxes.py", line 94, in _batched_nms_coordinate_trick
keep = nms(boxes_for_nms, scores, iou_threshold)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch_ops.py", line 755, in call
return self._op(*args, **(kwargs or {}))
I'm getting this error when running the workflow. The node that fails is AutoMasker; ComfyUI reports "Error occurred when executing AutoMasker:" followed by the same message and traceback shown above.
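In case it helps, this is how I intend to check whether my torchvision wheel actually ships CUDA kernels (assuming a torch/torchvision build mismatch is the cause):

```python
# Quick environment check: compare the torch and torchvision builds.
# Assumption: the CUDA kernel for torchvision::nms is missing because
# torchvision was installed as a CPU-only (or mismatched) wheel.
import torch
import torchvision

print("torch:", torch.__version__)            # e.g. 2.2.0+cu121 vs 2.2.0+cpu
print("torchvision:", torchvision.__version__)
print("torch CUDA:", torch.version.cuda)      # None on a CPU-only torch build
print("CUDA available:", torch.cuda.is_available())
```

If torch reports a CUDA build but torchvision is a CPU-only wheel (or the CUDA versions differ), I assume that mismatch is what removes the CUDA kernel for torchvision::nms.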