muskie82 / MonoGS

[CVPR'24 Highlight & Best Demo Award] Gaussian Splatting SLAM
https://rmurai.co.uk/projects/GaussianSplattingSLAM/

CUDA out of memory #101

Open Ysc-shark opened 6 months ago

Ysc-shark commented 6 months ago

When I run the Quick Demo, the following CUDA error message is displayed:

```
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 23.68 GiB total capacity; 18.25 GiB already allocated; 22.06 MiB free; 19.86 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
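The error message itself suggests one mitigation: capping the allocator's split size via `PYTORCH_CUDA_ALLOC_CONF`. A minimal sketch, assuming the demo is launched from the repository root; the value 128 MiB is illustrative and worth tuning, and this only reduces fragmentation, it cannot fix a genuine leak:

```shell
# Allocator hint from the error message; the 128 MiB value is a
# starting point to tune, not a definitive setting.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
# Then launch the demo in the same shell, e.g.:
#   python slam.py --config <your-config>.yaml
```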

I monitored the GPU during the run and noticed that VRAM usage gradually accumulates as the frame count increases, until the error occurs. I am using an RTX 3090. Is this expected behavior, or is VRAM not being released properly during the process?

Additionally, if I switch to a smaller dataset, the accumulation still occurs but does not exhaust the VRAM; and if I switch to eval mode and run headless, the issue above does not occur.

UltraHertzz commented 3 months ago

Did you run on a server without a screen? Alternatively, you can run in headless mode first and then display the point cloud using another Gaussian renderer.
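The suggestion above can be sketched as a two-step workflow. This is a hedged sketch: the `--eval` flag follows the repository README (it runs without the live GUI), and the config path is only an example, so adjust both to your setup:

```shell
# Step 1: run the SLAM headlessly; --eval (per the README) disables
# the interactive GUI. The config path here is illustrative.
python slam.py --config configs/mono/tum/fr3_office.yaml --eval

# Step 2: open the saved reconstruction in a separate Gaussian
# renderer / point-cloud viewer of your choice.
```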