idiap / ESLAM

Apache License 2.0

RuntimeError: CUDA out of memory #5

Closed. seoultechJS closed this issue 1 year ago

seoultechJS commented 1 year ago

Ours : GTX TitanX Ubuntu 20.04 RuntimeError: CUDA out of memory. Tried to allocate 62.00 MiB (GPU 0; 11.92 GiB total capacity; 8.81 GiB already allocated; 50.31 MiB free; 9.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

We got this error while running Replica or TUM RGB-D. With NICE-SLAM, this error only appeared on the dataset we constructed ourselves, not on TUM RGB-D or Replica. How can we fix it?
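For context, the allocator hint mentioned in the error message can be tried before touching the model itself. A minimal sketch, assuming the environment variable is set before the first CUDA allocation (the variable name is the standard PyTorch one; the 128 MiB split size is purely illustrative):

```python
import os

# The allocator option must be set before the first CUDA allocation,
# so do it before importing torch / building the model.
# 128 MiB is an illustrative split size, not a recommended value.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch  # imported after the environment variable on purpose

assert torch.cuda.is_available()
```

This only mitigates fragmentation; if the total working set is simply too large for the card, reducing the number of sampled rays (discussed below) is the more effective fix.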

shaoxiang777 commented 1 year ago

I think the possible reason is that they use different sampling strategies. In NICE-SLAM, a total of 1000 rays is selected across all keyframes in the window (mapping_window_size: 5). In ESLAM, the authors select 4000 rays for every keyframe in the window (mapping_window_size: 20), so 4000*20 rays in total. So I suggest you either reduce the number of selected rays or switch to a stronger GPU.
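Taking the numbers quoted in this comment at face value, the difference in per-iteration ray count is the key point; a quick back-of-envelope sketch (counts are the ones stated in this thread, not verified against the code):

```python
# Ray counts as quoted in this thread (not verified against the code).
nice_slam_rays = 1000            # total across the 5-keyframe window
eslam_rays_claimed = 4000 * 20   # per-keyframe reading: 4000 rays x 20 keyframes
eslam_rays_actual = 4000         # per the author's reply below: 4000 rays in total

print(eslam_rays_claimed / nice_slam_rays)  # 80x if the per-keyframe reading were right
print(eslam_rays_actual / nice_slam_rays)   # 4x with the author's correction
```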

MohammadJohari commented 1 year ago

Hello,

We actually select 4000 rays for all keyframes. That said, the best approach to deal with the memory limitation is indeed reducing the number of rays in mapping and tracking.
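A hedged sketch of what "reducing the number of rays" could look like in practice: load the scene YAML, scale down the sampling counts, and write out a low-memory variant before launching. The config path and the key names (`mapping`/`tracking` with an `n_pixels` field) are hypothetical placeholders; check the actual field names in the repo's config YAMLs.

```python
import yaml

CONFIG_PATH = "configs/Replica/room0.yaml"  # example path, adjust as needed
SCALE = 0.5  # halve the ray counts to roughly halve per-iteration activation memory

with open(CONFIG_PATH) as f:
    cfg = yaml.safe_load(f)

for section in ("mapping", "tracking"):
    key = "n_pixels"  # placeholder name for the sampled-ray count
    if section in cfg and key in cfg.get(section, {}):
        cfg[section][key] = max(1, int(cfg[section][key] * SCALE))

with open("configs/room0_lowmem.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```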

seoultechJS commented 1 year ago

Thanks for the fast reply! I'll try what you said.