JunyuanDeng / NeRF-LOAM

[ICCV2023] NeRF-LOAM: Neural Implicit Representation for Large-Scale Incremental LiDAR Odometry and Mapping
MIT License

CUDA out of memory when running on sequence 00 of maicity #9

Closed laliwang closed 6 months ago

laliwang commented 7 months ago

The OOM error happens when I run the command: python demo/run.py configs/maicity/maicity_00.yaml. Since I ran the code on an RTX 4090 GPU with 24 GB of memory available, the OOM shouldn't happen, as your README suggests. What confuses me greatly is that there is still memory available when the OOM happens. The output is shown as follows:

RuntimeError: CUDA out of memory. Tried to allocate 5.54 GiB (GPU 0; 23.65 GiB total capacity; 1.91 GiB already allocated; 5.50 GiB free; 1.92 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
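The error message itself points at one possible mitigation: if reserved memory greatly exceeds allocated memory, fragmentation may be the culprit, and capping the allocator's split size can help. A minimal sketch of trying that, assuming the value 128 MB purely for illustration (not a tested recommendation for this repo):

```shell
# Hypothetical mitigation suggested by the error message: limit the
# size of allocator blocks that can be split, to reduce fragmentation.
# 128 is an illustrative value; tune it for your workload.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
# then rerun: python demo/run.py configs/maicity/maicity_00.yaml
```

This only changes PyTorch's caching-allocator behavior; it does not free memory held by other processes on the GPU.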

It is also worth noting that the OOM tends to happen around step 246/699 of sequence 00 of maicity.

JunyuanDeng commented 7 months ago

Normally, this program has been fully tested on 24 GB GPUs, so it should not have this problem.

I notice that you have 23.65 GiB of total capacity and only 1.91 GiB allocated, but just 5.5 GiB free. Maybe you have other programs running on the GPU?
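The arithmetic behind this diagnosis can be checked directly from the numbers in the quoted error: PyTorch itself reserves only 1.92 GiB, so most of the card must be held by something outside this process (other programs, or driver/display overhead). A quick sketch using the figures from the error message:

```python
# Figures taken verbatim from the reported RuntimeError (GiB).
total_gib = 23.65     # GPU total capacity
reserved_gib = 1.92   # reserved by PyTorch in this process
free_gib = 5.50       # free at the time of the failed 5.54 GiB allocation

# Whatever is neither free nor reserved by this PyTorch process
# must be held elsewhere on the GPU.
outside_pytorch_gib = total_gib - reserved_gib - free_gib
print(f"Memory held outside this PyTorch process: {outside_pytorch_gib:.2f} GiB")
# → Memory held outside this PyTorch process: 16.23 GiB
```

With roughly 16 GiB unaccounted for by PyTorch, running `nvidia-smi` to list other GPU processes would be a natural next step.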