VoxelCubes / PanelCleaner

An AI-powered tool to clean manga panels.
GNU General Public License v3.0

output_worker_error:1433 #92

Closed: GH6 closed this issue 3 months ago

GH6 commented 3 months ago

Describe the bug: Running PanelCleaner's GUI crashes while processing an image; the session log below ends in a `NotImplementedError` from `torchvision::nms` on the CUDA backend.

To Reproduce:

  1. Open a terminal and run `pcleaner gui`.
  2. Import the image file and start processing with the default settings.

Expected behavior: The image is processed and the cleaned output is written without errors.

Session Log

```
2024-05-03 09:41:15.761 | CRITICAL | pcleaner.gui.mainwindow_driver:output_worker_error:1433 - Encountered an error while processing files.
Traceback (most recent call last):
  File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\pcleaner\gui\worker_thread.py", line 141, in run
    result = self.fn(*self.args, **self.kwargs)
  File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\pcleaner\gui\mainwindow_driver.py", line 1315, in generate_output
    prc.generate_output(
  File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\pcleaner\gui\processing.py", line 179, in generate_output
    ctm.model2annotations_gui(
  File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\pcleaner\gui\ctd_interface_gui.py", line 120, in model2annotations_gui
    process_image(
  File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\pcleaner\ctd_interface.py", line 165, in process_image
    mask, mask_refined, blk_list = model(
  File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\pcleaner\comic_text_detector\inference.py", line 179, in __call__
    blks = postprocess_yolo(blks, self.conf_thresh, self.nms_thresh, resize_ratio)
  File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\pcleaner\comic_text_detector\inference.py", line 115, in postprocess_yolo
    det = non_max_suppression(det, conf_thresh, nms_thresh)[0]
  File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\pcleaner\comic_text_detector\utils\yolov5_utils.py", line 263, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
  File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\_ops.py", line 755, in __call__
    return self._op(*args, **(kwargs or {}))
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

CPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
Meta: registered at /dev/null:440 [kernel]
QuantizedCPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:53 [backend fallback]
AutogradCPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:57 [backend fallback]
AutogradCUDA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:65 [backend fallback]
AutogradXLA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:69 [backend fallback]
AutogradMPS: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:77 [backend fallback]
AutogradXPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:61 [backend fallback]
AutogradHPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:90 [backend fallback]
AutogradLazy: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:73 [backend fallback]
AutogradMeta: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:81 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:297 [backend fallback]
AutocastCPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autocast\nms_kernel.cpp:34 [kernel]
AutocastCUDA: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autocast\nms_kernel.cpp:27 [kernel]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:720 [backend fallback]
BatchedNestedTensor: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:746 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:158 [backend fallback]
```
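
For anyone hitting the same traceback: here is a minimal standalone check, my own sketch rather than anything shipped with PanelCleaner, that calls the operator from the bottom of the traceback directly. If the CUDA call raises the same `NotImplementedError`, the installed torchvision is a CPU-only build sitting next to a CUDA-enabled torch.

```python
# Hypothetical repro sketch, not part of PanelCleaner: call torchvision's NMS
# operator directly on CPU and (if available) on CUDA.
import torch
import torchvision

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0], [1.0, 1.0, 11.0, 11.0]])
scores = torch.tensor([0.9, 0.8])

# The CPU kernel is registered in every torchvision build, so this should work.
print(torchvision.ops.nms(boxes, scores, 0.5))

if torch.cuda.is_available():
    # On a CPU-only torchvision wheel paired with a CUDA-enabled torch, this
    # line raises the same NotImplementedError as the session log above.
    print(torchvision.ops.nms(boxes.cuda(), scores.cuda(), 0.5))
```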


Additional context: When I run it from the prebuilt binary instead, I don't get any errors!

VoxelCubes commented 3 months ago

Looks like it tries to use CUDA but then fails. Try reinstalling PyTorch with the right build for your system: https://pytorch.org/get-started/locally/
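
Before reinstalling, a quick way to see whether torch and torchvision came from mismatched builds; this is a small sketch using only standard attributes of the two packages:

```python
# Environment check sketch: the local version tags of torch and torchvision
# should agree (both "+cuXXX" or both "+cpu"); otherwise CUDA ops like
# torchvision::nms are missing even though torch itself reports CUDA support.
import torch
import torchvision

print("torch:", torch.__version__)              # e.g. "2.2.2+cu121" or "2.2.2+cpu"
print("torchvision:", torchvision.__version__)
print("torch built against CUDA:", torch.version.cuda)
print("CUDA available at runtime:", torch.cuda.is_available())
```

If the tags disagree, reinstalling both packages with the command generated by the selector on that page should line them up.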

VoxelCubes commented 3 months ago

Good luck with Anaconda, I don't know anything about it. That's more for data science folks than for programmers.

As for CUDA, the model is deterministic, meaning you'll get the exact same result, just 2 to 5 times faster, depending on your CPU vs. GPU.
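
To get a feel for that speed difference on your own hardware, here is a toy benchmark, not PanelCleaner's actual detector, that times the same deterministic convolution on CPU and GPU:

```python
# Toy CPU-vs-GPU timing sketch (a generic convolution, not the text detector).
import time

import torch

x = torch.randn(8, 3, 512, 512)
conv = torch.nn.Conv2d(3, 64, kernel_size=3, padding=1)

def bench(device: str) -> float:
    xd = x.to(device)
    cd = conv.to(device)
    if device == "cuda":
        torch.cuda.synchronize()  # flush queued GPU work before timing
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(10):
            cd(xd)
    if device == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

print("cpu: ", bench("cpu"))
if torch.cuda.is_available():
    print("cuda:", bench("cuda"))
```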

GH6 commented 3 months ago

I deleted PyTorch and reinstalled it, and it works fine now! Stack Overflow said it was because torchaudio was not installed properly.

VoxelCubes commented 3 months ago

Well, Stack Overflow would be wrong there, because pcleaner doesn't use torchaudio; you can just go ahead and remove it. But glad you got it working!
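
If you'd rather verify that before uninstalling, here is a small sketch (assuming the installed distribution is named `pcleaner`) that lists its declared dependencies and checks whether torchaudio is among them:

```python
# Dependency check sketch: confirm torchaudio is not a declared requirement
# of the installed pcleaner distribution (distribution name assumed here).
from importlib.metadata import PackageNotFoundError, requires

try:
    deps = requires("pcleaner") or []
except PackageNotFoundError:
    deps = []

print("\n".join(deps))
print("declares torchaudio:", any(d.lower().startswith("torchaudio") for d in deps))
```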