facebookresearch / co-tracker

CoTracker is a model for tracking any point (pixel) on a video.
https://co-tracker.github.io/

GPU out of memory #12

Closed fenaux closed 1 year ago

fenaux commented 1 year ago

Thanks for sharing this amazing work. On both Colab and my computer (Nvidia RTX 3080), when I try to run the segmentation part or the dense tracking, I get:

OutOfMemoryError: CUDA out of memory. Tried to allocate 2.81 GiB (GPU 0; 15.74 GiB total capacity; 13.82 GiB already allocated; 605.50 MiB free; 13.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Is there a way to optimize that? Thanks for your help.

nikitakaraevv commented 1 year ago

Hi @fenaux, it looks like you were allocated a GPU in Colab that doesn't have enough memory for this part of the demo. You can try reducing grid_size when estimating tracks with a segmentation mask. I'll fix this later in the notebook.
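For reference, a minimal sketch of that change, assuming the demo notebook's CoTrackerPredictor interface and the video / segm_mask tensors prepared in the earlier cells (the checkpoint path may differ on your setup):

```python
import torch
from cotracker.predictor import CoTrackerPredictor

model = CoTrackerPredictor(checkpoint="./checkpoints/cotracker_stride_4_wind_8.pth")

# `video`: (1, T, 3, H, W) float tensor and `segm_mask`: (1, 1, H, W) binary mask,
# both built in earlier notebook cells.
if torch.cuda.is_available():
    model, video, segm_mask = model.cuda(), video.cuda(), segm_mask.cuda()

grid_size = 30  # smaller grid -> fewer initial query points -> less GPU memory
pred_tracks, pred_visibility = model(video, grid_size=grid_size, segm_mask=segm_mask)
```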

fenaux commented 1 year ago

Thanks for your fast answer. On my computer (a 16 GiB GPU, like the standard one on Colab) it works with grid_size = 60 (I did not try more). Once this change was made, there was no more problem with the dense tracks.

The relationship between grid_size and pred_tracks.size() is not clear to me: with grid_size = 40, pred_tracks.size() = [1, number of frames, 161, 2], and with grid_size = 60, pred_tracks.size() = [1, number of frames, 381, 2].

nikitakaraevv commented 1 year ago

In the example with segmentation, we sample grid_size × grid_size points on a regular grid. Later, these points are filtered using the binary segmentation mask, preserving only the points that lie inside the segmented area. This leaves us with 161 points out of the 40 × 40 = 1600 initially sampled points, and 381 out of the 60 × 60 = 3600 initially sampled points.
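A minimal sketch of that filtering step (an illustration rather than the exact library code, assuming a binary (H, W) mask tensor):

```python
import torch

def count_masked_grid_points(segm_mask: torch.Tensor, grid_size: int) -> int:
    """Count how many points of a regular grid_size x grid_size grid fall inside a binary (H, W) mask."""
    H, W = segm_mask.shape
    ys = torch.linspace(0, H - 1, grid_size).long()
    xs = torch.linspace(0, W - 1, grid_size).long()
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    # Keep only the grid points that land inside the segmented area.
    inside = segm_mask[grid_y, grid_x] > 0
    return int(inside.sum())
```

That surviving count is what shows up as the third dimension of pred_tracks: 161 of 1600 points for grid_size = 40 and 381 of 3600 for grid_size = 60 with this particular mask.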

fenaux commented 1 year ago

Once again, thanks for your prompt and clear answer.