Closed seoultechJS closed 1 year ago
Our setup: GTX TitanX, Ubuntu 20.04. RuntimeError: CUDA out of memory. Tried to allocate 62.00 MiB (GPU 0; 11.92 GiB total capacity; 8.81 GiB already allocated; 50.31 MiB free; 9.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
We got this error while running on Replica or TUM RGB-D. With NICE-SLAM, the error only appeared on the dataset we constructed ourselves, not on TUM RGB-D or Replica. How can we fix it?
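As the error message itself suggests, one low-effort thing to try before any code changes is the allocator hint. A sketch; the 128 MiB split size is only a starting guess to tune for your GPU, and the training command is a placeholder:

```shell
# Set before launching training; 128 is a starting guess, not a recommendation.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
# then launch your usual training command in the same shell
```

This only reduces fragmentation of reserved-but-unallocated memory; if the model genuinely needs more than ~12 GiB, it will still OOM.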
I think the likely reason is that they use different sampling strategies. NICE-SLAM selects 1000 rays in total across all keyframes in the window (mapping_window_size: 5), but in ESLAM the authors select 4000 rays for every keyframe in the window (mapping_window_size: 20), so 4000*20 rays in total. So I suggest reducing the number of selected rays or switching to a GPU with more memory.
Hello,
We actually select 4000 rays for all keyframes. That said, the best approach to deal with the memory limitation is indeed reducing the number of rays in mapping and tracking.
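Reducing the ray count can be sketched as below. This is a minimal illustration, not ESLAM's actual code: `sample_rays` and the specific counts are hypothetical, and in practice you would change the ray-count setting in the mapping/tracking config rather than add a helper.

```python
import numpy as np

def sample_rays(rays, n_rays, rng):
    """Hypothetical helper: randomly subsample rays before rendering.
    Fewer rays per iteration means roughly proportionally less GPU memory."""
    idx = rng.choice(rays.shape[0], size=min(n_rays, rays.shape[0]), replace=False)
    return rays[idx]

# Example: cut from 4000 to 2000 rays per iteration.
rng = np.random.default_rng(0)
rays = np.random.rand(4000, 6)  # one row per ray: [origin | direction]
sampled = sample_rays(rays, 2000, rng)
print(sampled.shape)  # (2000, 6)
```

Halving the ray count roughly halves the per-iteration activation memory, at some cost in reconstruction quality per iteration.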
Thanks for the fast reply! I'll try what you said.