naver / dust3r

DUSt3R: Geometric 3D Vision Made Easy
https://dust3r.europe.naverlabs.com/

Getting torch.cuda.OutOfMemoryError using more than 16 images #28

rowellz commented 4 months ago

Firstly, congrats to all the folks at Naver for their awesome accomplishments with DUSt3R. It was very straightforward getting dust3r up and running, but I discovered I run into torch.cuda.OutOfMemoryError when I try to process more than 16 images at once. I am running an RTX 3060 12GB and was wondering if anyone may know what I can do to resolve or debug an issue like this. I am running dust3r via docker-compose with PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128. Here is the full error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.11 GiB (GPU 0; 11.76 GiB total capacity; 9.84 GiB already allocated; 932.69 MiB free; 10.32 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
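
For reference, PYTORCH_CUDA_ALLOC_CONF is read when PyTorch initializes its CUDA allocator, so it only helps if it is visible to the Python process before that happens. Here is a rough sanity check (not part of dust3r; the value is just what I set in docker-compose) to confirm the setting actually reaches the Python process inside the container:

```python
# Rough sanity check (not part of dust3r): confirm the allocator setting from
# docker-compose is visible to the Python process before torch touches the GPU.
import os

# Fallback in case the docker-compose environment is not forwarded; harmless if it is.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch

print("PYTORCH_CUDA_ALLOC_CONF =", os.environ.get("PYTORCH_CUDA_ALLOC_CONF"))
print("GPU:", torch.cuda.get_device_name(0))
total_gib = torch.cuda.get_device_properties(0).total_memory / 1024**3
print(f"total GPU memory: {total_gib:.2f} GiB")
```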

Any help or insight would be much appreciated. Again, thanks to the folks at Naver for their awesome work and releasing it!

ffrivera0 commented 4 months ago

Hi,

You can refer to issue #1.

Some suggestions that may help:

  1. Set device='cpu' at the line below; the inference time will be longer. https://github.com/naver/dust3r/blob/7744dbae4475ebda78cd5cb9e6e60aa1eaefa5a4/demo.py#L136

  2. Use the "swin" or "oneref" scene graph to build fewer pairs for the global alignment, as mentioned in https://github.com/naver/dust3r/issues/1#issuecomment-1973308527 (see the sketch after this list).

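Putting the two together, a rough sketch based on the README usage might look like this (the checkpoint path and image folder are placeholders, and I am assuming the device='cpu' suggestion applies to the global_aligner call):

```python
# Rough sketch based on the README usage, combining both suggestions:
# fewer pairs via the "swin" scene graph + global alignment on the CPU.
from dust3r.inference import inference, load_model
from dust3r.utils.image import load_images
from dust3r.image_pairs import make_pairs
from dust3r.cloud_opt import global_aligner, GlobalAlignerMode

model_path = "checkpoints/DUSt3R_ViTLarge_BaseDecoder_512_dpt.pth"  # placeholder path
device = 'cuda'  # keep the network inference on the GPU
batch_size = 1

model = load_model(model_path, device)
images = load_images('path/to/your/images', size=512)  # placeholder folder

# "swin" builds a sliding-window pair graph instead of the complete graph,
# so the number of pairs grows roughly linearly with the number of images;
# "oneref" instead pairs every image against a single reference image.
pairs = make_pairs(images, scene_graph='swin', prefilter=None, symmetrize=True)
output = inference(pairs, model, device, batch_size=batch_size)

# Run the global alignment on the CPU (suggestion 1); slower, but the optimizer
# state no longer competes with the network for the 12 GB of GPU memory.
scene = global_aligner(output, device='cpu', mode=GlobalAlignerMode.PointCloudOptimizer)
loss = scene.compute_global_alignment(init='mst', niter=300, schedule='cosine', lr=0.01)
```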