Closed: derekcbr closed this issue 3 months ago
Just reinstalled torch and torchvision with matching versions. It works in Forge, but calling it through the API still has issues; I cannot use the API the way I can in automatic1111. In the WebUI, the API returns a list [True, {ad_model...}]. I tried passing back {ad_enable, ad_model}, but it says it is not an AdetailerUnit.
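For reference, the payload shape I am trying follows the upstream ADetailer API convention (the exact args layout sd-forge-adetailer expects may differ; the prompt, model name, and port below are just placeholders):

```python
import requests

payload = {
    "prompt": "a portrait photo",
    "steps": 20,
    "alwayson_scripts": {
        "ADetailer": {
            # Upstream ADetailer takes the enable flag first, then one dict
            # per unit; this matches the [True, {ad_model...}] list the
            # WebUI API returns. sd-forge-adetailer may expect a different shape.
            "args": [
                True,  # ad_enable
                {
                    "ad_model": "face_yolov8n.pt",
                    "ad_confidence": 0.3,
                },
            ]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
```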
```
Exception in thread Thread-33 (thread_safe_predict):
Traceback (most recent call last):
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "D:\AI\stable-diffusion-webui-forge\extensions\sd-forge-adetailer\lib_adetailer\common.py", line 29, in run
    self._return = self._target(*self._args, **self._kwargs)
  File "D:\AI\stable-diffusion-webui-forge\extensions\sd-forge-adetailer\lib_adetailer\ultralytics.py", line 21, in thread_safe_predict
    return model.predict(image, conf=confidence, device=device, retina_masks=True)
  File "D:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\ultralytics\engine\model.py", line 441, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "D:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\ultralytics\engine\predictor.py", line 168, in __call__
    return list(self.stream_inference(source, model, *args, **kwargs))  # merge list of Result into one
  File "D:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "D:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\ultralytics\engine\predictor.py", line 255, in stream_inference
    self.results = self.postprocess(preds, im, im0s)
  File "D:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\ultralytics\models\yolo\detect\predict.py", line 25, in postprocess
    preds = ops.non_max_suppression(
  File "D:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\ultralytics\utils\ops.py", line 282, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "D:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
  File "D:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\_ops.py", line 755, in __call__
    return self._op(*args, **(kwargs or {}))
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

CPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
Meta: registered at /dev/null:440 [kernel]
QuantizedCPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:53 [backend fallback]
AutogradCPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:57 [backend fallback]
AutogradCUDA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:65 [backend fallback]
AutogradXLA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:69 [backend fallback]
AutogradMPS: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:77 [backend fallback]
AutogradXPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:61 [backend fallback]
AutogradHPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:90 [backend fallback]
AutogradLazy: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:73 [backend fallback]
AutogradMeta: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:81 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:297 [backend fallback]
AutocastCPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autocast\nms_kernel.cpp:34 [kernel]
AutocastCUDA: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autocast\nms_kernel.cpp:27 [kernel]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:720 [backend fallback]
BatchedNestedTensor: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:746 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:158 [backend fallback]
```
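This NotImplementedError is the classic symptom of a CPU-only torchvision build, or one compiled against a different CUDA version than torch. A quick sanity check from the same venv (just a diagnostic sketch; the version strings in the comments are examples):

```python
import torch
import torchvision

print(torch.__version__, torch.version.cuda)  # e.g. "2.1.2+cu121", "12.1"
print(torchvision.__version__)                # should carry the same +cuXXX tag
print(torch.cuda.is_available())

# If the builds really match, CUDA NMS should run without raising.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")
print(torchvision.ops.nms(boxes, scores, 0.5))
```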
```
Error running postprocess_image: D:\AI\stable-diffusion-webui-forge\extensions\sd-forge-adetailer\scripts\adetailer.py
Traceback (most recent call last):
  File "D:\AI\stable-diffusion-webui-forge\modules\scripts.py", line 883, in postprocess_image
    script.postprocess_image(p, pp, *script_args)
  File "D:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui-forge\extensions\sd-forge-adetailer\scripts\adetailer.py", line 114, in postprocess_image
    processed |= afterdetailer_process_image(i, unit, p, pp, *args)
  File "D:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui-forge\extensions\sd-forge-adetailer\lib_adetailer\process.py", line 106, in afterdetailer_process_image
    pred = predictor(ad_model, pp.image, detection_confidence_threshold, **kwargs)
  File "D:\AI\stable-diffusion-webui-forge\extensions\sd-forge-adetailer\lib_adetailer\ultralytics.py", line 36, in ultralytics_predict
    bboxes = pred[0].boxes.xyxy.cpu().numpy()
TypeError: 'NoneType' object is not subscriptable
```
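The TypeError is just fallout from the crash above: thread_safe_predict died in its worker thread, so the stored return value stayed None and pred[0] blew up. A sketch of a result-carrying thread that re-raises the worker's exception instead of silently returning None (ThreadWithResult is a hypothetical name, not the extension's actual class in lib_adetailer/common.py):

```python
import threading

class ThreadWithResult(threading.Thread):
    """Hypothetical sketch: capture the target's result *and* its exception,
    so the caller sees the original error instead of a None result."""

    def __init__(self, target, args=(), kwargs=None):
        super().__init__()
        self._target_fn = target
        self._args = args
        self._kwargs = kwargs or {}
        self._result = None
        self._exc = None

    def run(self):
        try:
            self._result = self._target_fn(*self._args, **self._kwargs)
        except BaseException as e:  # stored here, re-raised in join_and_get()
            self._exc = e

    def join_and_get(self, timeout=None):
        self.join(timeout)
        if self._exc is not None:
            raise self._exc
        return self._result
```

With a wrapper like this, the API call would fail with the underlying torchvision::nms error instead of the misleading NoneType message.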
```
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:09<00:00, 2.06it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:09<00:00, 4.62it/s]
```