Thanks for this great work! When I train Gaussian Grouping on the MipNeRF 360 dataset, I often encounter the error below. Any idea why this happens?
```
Traceback (most recent call last):
  File "train.py", line 262, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from, args.use_wandb)
  File "train.py", line 147, in training
    gaussians.optimizer.step()
  File "/home/wjlyu/miniconda3/envs/grouping/lib/python3.8/site-packages/torch/optim/optimizer.py", line 113, in wrapper
    return func(*args, **kwargs)
  File "/home/wjlyu/miniconda3/envs/grouping/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/wjlyu/miniconda3/envs/grouping/lib/python3.8/site-packages/torch/optim/adam.py", line 113, in step
    self._cuda_graph_capture_health_check()
  File "/home/wjlyu/miniconda3/envs/grouping/lib/python3.8/site-packages/torch/optim/optimizer.py", line 86, in _cuda_graph_capture_health_check
    capturing = torch.cuda.is_current_stream_capturing()
  File "/home/wjlyu/miniconda3/envs/grouping/lib/python3.8/site-packages/torch/cuda/graphs.py", line 24, in is_current_stream_capturing
    return _cuda_isCurrentStreamCapturing()
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
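As the message suggests, I plan to retry with CUDA_LAUNCH_BLOCKING=1 so the traceback points at the kernel that actually faulted, rather than at a later API call like is_current_stream_capturing(). A minimal sketch of how I'd set it at the top of train.py (assuming nothing else imports torch first):

```python
# Force synchronous CUDA kernel launches so the first faulting kernel
# raises its error at its own call site instead of being reported
# asynchronously later. Must be set before torch initializes the CUDA
# context, so it goes before the torch import.
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # imported after the env var so the setting takes effect
```

Equivalently, prefixing the launch command with `CUDA_LAUNCH_BLOCKING=1` works without touching the code.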
Thank you!