ccy-ustb opened 1 year ago
Hi! If your GPU memory is very small, you can resize the images by lowering the `--max_image_size` parameter, whose default is 1600 here: https://github.com/cvg/limap/blob/main/cfgs/triangulation/default.yaml#L15. However, this will result in degraded performance.
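As a sketch, the relevant field in the config file referenced above would look like this (1024 is an arbitrary example value, not a recommended setting):

```yaml
# cfgs/triangulation/default.yaml (only the relevant field shown)
# Lower values reduce GPU memory usage at the cost of accuracy; default is 1600.
max_image_size: 1024
```

The same value can also be overridden on the command line via `--max_image_size`, as mentioned above.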
In the implementation, only one image is inferred at a time (parallelization is disabled by default for feature extraction and matching), so unfortunately there is no way to reduce the memory requirements without degrading performance once they exceed your GPU's memory limit.
Thanks a lot!
Hello, when I build a line map on a dataset with over 1000 images, I get the following error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 938.00 MiB (GPU 0; 5.77 GiB total capacity; 1.07 GiB already allocated; 879.31 MiB free; 3.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
So I would like to ask whether certain parameters can be changed to accommodate GPUs with small memory?
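Aside from lowering the image size, the error message itself points at a possible mitigation for allocator fragmentation. `max_split_size_mb` is a documented option of PyTorch's CUDA caching allocator; a minimal sketch (the `128` value and the runner script name are illustrative assumptions, not tested recommendations):

```shell
# Cap the size of splittable allocator blocks to reduce fragmentation.
# This does not lower total memory use, but can help when "reserved" memory
# is much larger than "allocated" memory, as in the error above.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Then run the pipeline as usual (hypothetical invocation):
# python runners/hypersim/triangulation.py --max_image_size 1024
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

Note this only helps with fragmentation; if a single inference pass genuinely needs more memory than the GPU has, reducing `--max_image_size` remains the main lever.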