Eval: Total FPS 0.4507488987879184
Process Process-3:
Traceback (most recent call last):
File "/home/sinha/.conda/envs/MonoGS_v13_gg/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/sinha/.conda/envs/MonoGS_v13_gg/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/sinha/Dokumente/HygDro/MonoGS_09.08/MonoGS/utils/slam_backend.py", line 481, in run
self.frontend_queue.get()
File "/home/sinha/.conda/envs/MonoGS_v13_gg/lib/python3.10/multiprocessing/queues.py", line 122, in get
return _ForkingPickler.loads(res)
File "/home/sinha/.conda/envs/MonoGS_v13_gg/lib/python3.10/site-packages/torch/multiprocessing/reductions.py", line 113, in rebuild_cuda_tensor
storage = storage_cls._new_shared_cuda(
File "/home/sinha/.conda/envs/MonoGS_v13_gg/lib/python3.10/site-packages/torch/storage.py", line 779, in _new_shared_cuda
return torch._UntypedStorage._new_shared_cuda(*args, **kwargs)
RuntimeError: CUDA error: invalid device context
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
[W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
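The traceback shows the backend process (`slam_backend.py`, `self.frontend_queue.get()`) deserializing a CUDA tensor that the frontend put on a `torch.multiprocessing` queue: `_ForkingPickler.loads` calls `rebuild_cuda_tensor`, which reopens the sender's CUDA IPC handle and fails here with "invalid device context". The `CudaIPCTypes.cpp` warning indicates the producer process exited before the consumer finished rebuilding the shared tensor. Below is a minimal sketch of that sharing pattern, not MonoGS code; all names (`producer`, `consumer`, `q`, `done`) are hypothetical.

```python
# Minimal sketch of CUDA tensor sharing over a torch.multiprocessing queue.
# Hypothetical names; this only illustrates the pattern the traceback hits.
import torch
import torch.multiprocessing as mp


def producer(q, done):
    t = torch.ones(4, device="cuda")
    q.put(t)      # only a CUDA IPC handle is pickled, not the tensor data
    done.wait()   # keep this process (and the storage) alive until the
                  # consumer has rebuilt the tensor; exiting earlier is what
                  # the CudaIPCTypes warning above complains about


def consumer(q, done):
    t = q.get()   # _ForkingPickler.loads -> rebuild_cuda_tensor runs here
    print(t.sum().item())
    done.set()


if __name__ == "__main__":
    mp.set_start_method("spawn")  # required when sharing CUDA tensors
    q, done = mp.Queue(), mp.Event()
    procs = [mp.Process(target=producer, args=(q, done)),
             mp.Process(target=consumer, args=(q, done))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```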
MonoGS: Backend stopped and joined the main thread
GUI: Received terminate signal
GUI: Closing Visualization
X Error of failed request: BadDrawable (invalid Pixmap or Window parameter)
Major opcode of failed request: 72 (X_PutImage)
Resource id in failed request: 0x300007
Serial number of failed request: 4729
Current serial number in output stream: 4759
MonoGS: GUI Stopped and joined the main thread
MonoGS: Done.
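Since the failure happens during shutdown, while one process is still draining the queue after the other has exited, one possible mitigation (an assumption on my side, not the MonoGS fix) is to move tensors to CPU before queueing them so no CUDA IPC handle has to outlive the sender. A hedged sketch, with `queue` and `msg` as hypothetical names:

```python
# Sketch of one mitigation: detach and copy CUDA tensors to CPU before
# putting them on an inter-process queue. The receiver moves them back with
# .to("cuda"); this trades an extra host copy for robustness at teardown.
import torch


def put_safely(queue, msg):
    """Copy any CUDA tensors in a dict-like message to CPU before queueing."""
    safe = {
        k: (v.detach().cpu() if torch.is_tensor(v) and v.is_cuda else v)
        for k, v in msg.items()
    }
    queue.put(safe)
```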