naver / dust3r

DUSt3R: Geometric 3D Vision Made Easy
https://dust3r.europe.naverlabs.com/

CUDA OOM #1

Open ducha-aiki opened 7 months ago

ducha-aiki commented 7 months ago

Hi,

The performance is really amazing on the few image pairs I have tried. However, when I moved to a bigger scene (29 images), it crashes with CUDA OOM on a 16 GB V100. Any recommendations on how I can run it?

```
  File "/home/old-ufo/dev/dust3r/dust3r/cloud_opt/optimizer.py", line 176, in forward
    aligned_pred_i = geotrf(pw_poses, pw_adapt * self._stacked_pred_i)
  File "/home/old-ufo/dev/dust3r/dust3r/utils/geometry.py", line 86, in geotrf
    pts = pts @ Trf[..., :-1, :] + Trf[..., -1:, :]

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.38 GiB. GPU 0 has a total capacity of 15.77 GiB of which 775.88 MiB is free. Including non-PyTorch memory, this process has 15.01 GiB memory in use. Of the allocated memory 13.70 GiB is allocated by PyTorch, and 922.14 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
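As the traceback itself hints, setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` can reduce allocator fragmentation. This only helps when a lot of memory is "reserved but unallocated"; it will not fix a genuine capacity shortfall. The value 128 below is an arbitrary example, not a recommendation from the maintainers:

```python
import os

# Mitigate CUDA allocator fragmentation (per the hint in the OOM message).
# Must be set before PyTorch makes its first CUDA allocation, i.e. ideally
# before `import torch`. 128 MB is an example value; tune for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```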
jerome-revaud commented 7 months ago

Oh... No easy fix that I can see. We usually perform our experiments on an A100 with 80GB, so we never particularly optimized the memory, sorry ^^' With 80GB, we could optimize scenes with 200+ images.

jerome-revaud commented 7 months ago

maybe @yocabon would have a better idea?

jerome-revaud commented 7 months ago

One solution, kindly suggested by my colleague Romain Bregier, is to run the global alignment on the CPU. It will be slower but will not crash...

ducha-aiki commented 7 months ago

Thank you, I will try it. And for GPU: is there a way to use multiple GPUs? I have a server with 8 V100s (8 x 16 GB), not A100s unfortunately.

yocabon commented 7 months ago

Hi, I updated the demo script to expose the "scene_graph" parameter. By default we build all possible pairs, which explodes as you add more images. Use the "sliding window" or "one reference" method to make fewer pairs; then it should fit in memory.

No, we didn't implement multi-GPU inference.
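To see why the default complete graph blows up, here is a back-of-the-envelope sketch of how pair counts grow under each strategy. The function names and the window size are illustrative stand-ins, not dust3r's actual API:

```python
# Hypothetical sketch of pair-count growth per scene-graph strategy.
# Names and defaults are illustrative, not dust3r's real implementation.

def complete_pairs(n):
    """All unordered image pairs: n*(n-1)/2, quadratic in n."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)]

def sliding_window_pairs(n, window=3):
    """Each image paired only with its next `window` neighbors: O(n*window)."""
    return [(i, j) for i in range(n) for j in range(i + 1, min(i + 1 + window, n))]

def one_reference_pairs(n, ref=0):
    """Every image paired with a single reference image: n-1 pairs."""
    return [(ref, j) for j in range(n) if j != ref]

n = 29  # the scene size from this issue
print(len(complete_pairs(n)))        # 406 pairs
print(len(sliding_window_pairs(n)))  # 81 pairs
print(len(one_reference_pairs(n)))   # 28 pairs
```

At 29 images the complete graph already needs 406 forward passes; at 200 images it would need 19,900, which is why a sparser graph is essential for large scenes.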

ducha-aiki commented 7 months ago

Oh, that's super useful, thank you!

nickponline commented 7 months ago

How do we set the global alignment to run on CPU?

nickponline commented 7 months ago

I think maybe this: `scene = global_aligner(output, device="cpu", mode=mode)`

nickponline commented 7 months ago

That seems to work ^

ducha-aiki commented 7 months ago

@nickponline Just tried 36 images on CPU; now I get a CPU OOM error on a machine with 120 GB of RAM. Is there a way to reduce the number of points besides using 224x224 resolution?

```
RuntimeError: [enforce fail at alloc_cpu.cpp:83] err == 0. DefaultCPUAllocator: can't allocate memory: you tried to allocate 2005401600 bytes. Error code 12 (Cannot allocate memory)
```
jerome-revaud commented 7 months ago

@ducha-aiki do you have a scene covisibility graph? If so, using it would greatly reduce the memory usage. On an A100 with 80GB, we are able to optimize scenes with 200+ images when we use 10 nearest neighbors (10NN) per image.

jerome-revaud commented 7 months ago

We had this implemented here https://github.com/naver/dust3r/blob/b6eb95705c2948750283638d5fbb5c12a3a8bf21/dust3r/image_pairs.py#L11 but it didn't make it into the final version...

ducha-aiki commented 7 months ago

I don't have a covisibility graph, but I can probably run DINOv2 or SALAD to get an estimate. Thank you for the suggestion!
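As a rough illustration of the idea above, one could turn global image descriptors (e.g. from DINOv2 or SALAD) into a k-NN covisibility graph and only feed those pairs to the optimizer. This is a hypothetical NumPy sketch; `knn_pairs`, the descriptor dimension, and k=10 are assumptions, not part of the dust3r codebase:

```python
# Hypothetical sketch: k-NN covisibility graph from global image descriptors.
# Descriptor source (DINOv2 / SALAD) and k are assumptions from this thread.
import numpy as np

def knn_pairs(descs: np.ndarray, k: int = 10):
    """Return unique (i, j) pairs linking each image to its k most similar
    images by cosine similarity of their descriptors."""
    d = descs / np.linalg.norm(descs, axis=1, keepdims=True)
    sim = d @ d.T
    np.fill_diagonal(sim, -np.inf)        # never pair an image with itself
    nn = np.argsort(-sim, axis=1)[:, :k]  # k nearest neighbors per image
    pairs = {tuple(sorted((i, int(j)))) for i in range(len(descs)) for j in nn[i]}
    return sorted(pairs)

rng = np.random.default_rng(0)
descs = rng.normal(size=(200, 384))       # stand-in for real embeddings
pairs = knn_pairs(descs, k=10)
print(len(pairs))  # at most 200*10 pairs, far below the 19900 of a complete graph
```

Each unordered pair can be contributed by both of its endpoints, so the final count lands between n*k/2 and n*k, still linear in the number of images.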

xuyanging commented 1 week ago

> Oh... No easy fix that i can see, we usually perform our experiments on A100 with 80GB so we never particularly optimized the memory, sorry ^^' With 80GB, we could optimize scenes with 200+ images.

Good news: I replaced my 3090 laptop with an H100 and it works!