Eval step
100%|██████████| 6/6 [00:03<00:00, 1.55it/s]
eval stats: {'psnr': 6.496485767868177, 'mse': 0.2280503436923027}
Train step
0%| | 0/12800 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "opt.py", line 605, in <module>
    train_step()
  File "opt.py", line 597, in train_step
    grid.optim_background_step(lr_sigma_bg, lr_color_bg, beta=args.rms_beta, optim=args.bg_optim)
  File "/root/miniconda/envs/plenoxel/lib/python3.8/site-packages/svox2/svox2.py", line 2055, in optim_background_step
    indexer = self._maybe_convert_sparse_grad_indexer(bg=True)
  File "/root/miniconda/envs/plenoxel/lib/python3.8/site-packages/svox2/svox2.py", line 2217, in _maybe_convert_sparse_grad_indexer
    torch.count_nonzero(indexer).item()
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 7.93 GiB total capacity; 6.78 GiB already allocated; 362.56 MiB free; 6.79 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
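The error message itself suggests a first thing to try: since reserved memory (6.79 GiB) is close to allocated memory (6.78 GiB), fragmentation may not be the whole story, but capping the caching allocator's split size via PYTORCH_CUDA_ALLOC_CONF can still free up room for the 1 GiB request. A minimal sketch of applying that hint before relaunching opt.py — the 128 MiB value is an illustrative assumption, not a tuned setting:

```shell
# Sketch only: follow the OOM message's hint by configuring PyTorch's
# CUDA caching allocator before starting training. max_split_size_mb
# limits how large a cached block may be before it is split, which can
# reduce fragmentation on small-VRAM GPUs (here ~8 GiB) at some cost
# in allocator performance. 128 is an example value to experiment with.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
# then rerun training, e.g.:  python opt.py <your existing arguments>
```

If this is not enough, the usual fallbacks are reducing the batch/ray count or the background grid resolution so the optimizer step fits in the remaining ~362 MiB of free memory.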