Fanie-Visagie closed this issue 11 months ago
set CUDA_LAUNCH_BLOCKING=1

Been getting BSOD all night!!

(gaussian_splatting) F:\gaussian-splatting>python train.py -s data\burner
Optimizing
Output folder: ./output/615d42c9-9 [18/10 20:51:30]
Tensorboard not available: not logging progress [18/10 20:51:30]
Reading camera 211/211 [18/10 20:51:33]
Loading Training Cameras [18/10 20:51:33]
[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K. If this is not desired, please explicitly specify '--resolution/-r' as 1 [18/10 20:51:33]
Loading Test Cameras [18/10 20:53:38]
Number of points at initialisation : 164424 [18/10 20:53:38]
Training progress: 23%|█████████▊ | 7000/30000 [07:37<30:25, 12.60it/s, Loss=0.0900410]

[ITER 7000] Evaluating train: L1 0.04548098742961884 PSNR 22.65747871398926 [18/10 21:01:17]
[ITER 7000] Saving Gaussians [18/10 21:01:17]
Traceback (most recent call last):
  File "train.py", line 216, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "train.py", line 83, in training
    render_pkg = render(viewpoint_cam, gaussians, pipe, background)
  File "F:\gaussian-splatting\gaussian_renderer\__init__.py", line 99, in render
    "visibility_filter" : radii > 0,
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Training progress: 23%|█████████▊ | 7000/30000 [07:44<25:24, 15.09it/s, Loss=0.0900410]

(gaussian_splatting) F:\gaussian-splatting>
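As the traceback itself notes, CUDA kernel errors are reported asynchronously, so the line in the stack trace may not be where the illegal access actually happened. Below is a minimal sketch of forcing synchronous launches from inside a script; the shell equivalent is set CUDA_LAUNCH_BLOCKING=1 on Windows or export CUDA_LAUNCH_BLOCKING=1 on Linux, and note the variable name is CUDA_LAUNCH_BLOCKING, not CUDA_BLOCKING.

import os

# CUDA_LAUNCH_BLOCKING must be set before torch initializes CUDA, so set it
# before the first import of torch (or in the shell before running train.py).
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch

# With blocking launches, a faulty kernel raises at the call that launched it
# instead of at some later, unrelated API call.
x = torch.arange(8, device="cuda")
torch.cuda.synchronize()  # flush pending work; errors surface here at the latest
print(x.sum().item())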
Hi, I'm hitting the same problem. Could you share what the cause was and how you solved it? Many thanks!
I've run into the same issue. Any ideas on how to resolve it? Thanks!!
Try to ensure that all the parameters are on the same device.
@raywzy Hi! Could you please explain in detail how to solve this problem? I've run into the same problem and am struggling to figure out how to fix it.
Same problem.
Same problem here. How did you solve it?
Try to ensure that all the parameters are on the same device.
Are you saying to put parameters such as means, quat, scale, opacity, and color into video memory?
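For what it's worth, here is a minimal sketch of what "same device" means in practice. The parameter names (means, quats, scales, opacities, colors) are illustrative only, not the repo's actual attribute names; in this thread's traceback the same check would apply to the arguments of render(viewpoint_cam, gaussians, pipe, background).

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Hypothetical per-Gaussian parameters; shapes and names are illustrative only.
params = {
    "means": torch.rand(1000, 3),
    "quats": torch.rand(1000, 4),
    "scales": torch.rand(1000, 3),
    "opacities": torch.rand(1000, 1),
    "colors": torch.rand(1000, 3),
}

# Move everything onto one device before it reaches the CUDA rasterizer; a
# mix of CPU and GPU tensors is a classic cause of illegal memory accesses.
params = {name: t.to(device) for name, t in params.items()}

for name, t in params.items():
    assert t.device == device, f"{name} is on {t.device}, expected {device}"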
Maybe something is wrong with the code of the diff-gaussian-splatting submodule.
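A quick sanity check along those lines, assuming the stock repo layout where the rasterizer submodule installs as the diff_gaussian_rasterization package:

import torch

# Check which CUDA toolkit PyTorch was built against and what the GPU is; an
# extension compiled for a different architecture or toolkit can fail with
# exactly this kind of illegal memory access.
print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("GPU:", torch.cuda.get_device_name(0),
      "| compute capability:", torch.cuda.get_device_capability(0))

try:
    import diff_gaussian_rasterization  # compiled from the repo's submodule
    print("rasterizer extension imported OK")
except ImportError as exc:
    print("rasterizer extension failed to import; rebuild the submodule:", exc)

If the versions disagree or the import fails, reinstalling the submodule (e.g. pip install ./submodules/diff-gaussian-rasterization from the repo root, assuming a standard checkout with submodules initialized) is a common fix.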