This may be due to my setup; with the --single-process flag everything works fine. Without --single-process, the following error is emitted:
RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\tmp_conda_3.7_104508\conda\conda-bld\pytorch_1572950778684\work\torch/csrc/generic/StorageSharing.cpp:245
Traceback (most recent call last):
  File "C:\test\lib\multiprocessing\queues.py", line 236, in _feed
    obj = _ForkingPickler.dumps(obj)
  File "C:\test\lib\multiprocessing\reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
  File "C:\test\lib\site-packages\torch\multiprocessing\reductions.py", line 242, in reduce_tensor
    event_sync_required) = storage._share_cuda_()
RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\tmp_conda_3.7_104508\conda\conda-bld\pytorch_1572950778684\work\torch/csrc/generic/StorageSharing.cpp:245
Traceback (most recent call last):
  File "C:\test\lib\multiprocessing\queues.py", line 236, in _feed
    obj = _ForkingPickler.dumps(obj)
  File "C:\test\lib\multiprocessing\reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
  File "C:\test\lib\site-packages\torch\multiprocessing\reductions.py", line 242, in reduce_tensor
    event_sync_required) = storage._share_cuda_()
RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\tmp_conda_3.7_104508\conda\conda-bld\pytorch_1572950778684\work\torch/csrc/generic/StorageSharing.cpp:245
Process _PredictWorker-1:
Traceback (most recent call last):
  File "C:\test\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "C:\g\vn\lc\detectron2-pipeline\pipeline\libs\async_predictor.py", line 32, in run
    task = self.task_queue.get()
  File "C:\test\lib\multiprocessing\queues.py", line 94, in get
    res = self._recv_bytes()
  File "C:\test\lib\multiprocessing\connection.py", line 216, in recv_bytes
    buf = self._recv_bytes(maxlength)
  File "C:\test\lib\multiprocessing\connection.py", line 306, in _recv_bytes
    [ov.event], False, INFINITE)
It's hard to say what's going on. It looks like something is wrong with your CUDA setup or with the Detectron2 GPU setup. To rule that out, you can switch off the GPU by adding the --gpus 0 --cpus 1 options.
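For what it's worth, error 801 ("operation not supported") from StorageSharing.cpp typically means CUDA IPC, which torch.multiprocessing uses to share CUDA tensors between processes, is not available, and it is not supported on Windows. A hedged workaround sketch, not the project's actual fix: pass CPU tensors through the queue and move them to the GPU inside the worker. The names below (to_queue_safe, predict_worker) are hypothetical, loosely modeled on async_predictor.py.

```python
# Workaround sketch: avoid CUDA IPC by queueing CPU tensors only.
# reduce_tensor / storage._share_cuda_() is what fails with error 801
# when a CUDA tensor is pickled for a multiprocessing queue on Windows.
import pickle

import torch
import torch.multiprocessing as mp


def to_queue_safe(tensor: torch.Tensor) -> torch.Tensor:
    """Detach and copy a tensor to CPU so a multiprocessing queue can
    pickle it without going through CUDA IPC."""
    return tensor.detach().cpu()


def predict_worker(task_queue: "mp.Queue", result_queue: "mp.Queue") -> None:
    """Hypothetical worker loop: receive CPU tensors, compute on the GPU
    when one is available, and send CPU copies of the results back."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    while True:
        task = task_queue.get()
        if task is None:                      # sentinel: shut down
            break
        result = task.to(device) * 2          # stand-in for the model call
        result_queue.put(to_queue_safe(result))


# Queues pickle their payload, so a pickle round-trip exercises the same
# serialization path a queue would use; the CPU copy survives it:
payload = pickle.dumps(to_queue_safe(torch.ones(2, 2)))
restored = pickle.loads(payload)
```

In the actual pipeline, something like predict_worker would take over _PredictWorker.run's loop, the producer would call task_queue.put(to_queue_safe(tensor)), and the real Detectron2 prediction would replace the `* 2` stand-in.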