Hi @fenaux, it looks like you were allocated a GPU in Colab that doesn't have enough memory for this part of the demo. You can try reducing grid_size when estimating tracks with a segmentation mask. I'll fix this later in the notebook.
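For concreteness, here is a minimal sketch of what lowering grid_size looks like, assuming the CoTrackerPredictor interface used in the demo notebook (the checkpoint path, tensor shapes, and values below are illustrative placeholders, not taken from this thread):

```python
import torch
from cotracker.predictor import CoTrackerPredictor

# Stand-ins for the notebook's inputs: video is (B, T, C, H, W),
# segm_mask is (B, 1, H, W) with ones inside the object of interest.
video = torch.randn(1, 48, 3, 384, 512)
segm_mask = torch.zeros(1, 1, 384, 512)
segm_mask[:, :, 100:300, 150:400] = 1.0

model = CoTrackerPredictor(checkpoint="./checkpoints/cotracker.pth")  # hypothetical path

# A smaller grid_size means fewer tracked points and lower peak GPU memory.
pred_tracks, pred_visibility = model(
    video,
    grid_size=30,         # reduce this if you hit CUDA OOM
    segm_mask=segm_mask,  # grid points outside the mask are dropped
)
print(pred_tracks.shape)  # (1, T, N, 2) with N <= grid_size * grid_size
```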
Thanks for your fast answer. On my computer (a 16 GiB GPU, the same as the standard one on Colab) it works with grid_size = 60 (I did not try higher); once this change was made, the dense tracks caused no more problems.
The relationship between grid_size and pred_tracks.size() is not clear to me: with grid_size = 40, pred_tracks.size() = [1, number of frames, 161, 2], and with grid_size = 60, pred_tracks.size() = [1, number of frames, 381, 2].
In the example with segmentation, we sample grid_size * grid_size points on a regular grid. Later, these points are filtered using a binary segmentation mask, preserving only the points that lie inside the segmented area. This leaves us with 161 points out of the 40 * 40 = 1600 initially sampled points, and 381 out of the 60 * 60 = 3600 initially sampled points.
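To make the arithmetic concrete, here is a small self-contained sketch of that sampling-and-filtering step (plain PyTorch, not the actual CoTracker code; the mask and frame sizes are made up):

```python
import torch

def sample_masked_grid(grid_size, mask):
    """Sample grid_size x grid_size points on a regular grid over the image,
    then keep only the points that fall inside a binary mask of shape (H, W)."""
    H, W = mask.shape
    ys = torch.linspace(0, H - 1, grid_size)
    xs = torch.linspace(0, W - 1, grid_size)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    points = torch.stack([gx.flatten(), gy.flatten()], dim=-1)  # (grid_size**2, 2) as (x, y)
    inside = mask[points[:, 1].long(), points[:, 0].long()] > 0
    return points[inside]

# Toy mask covering part of a 384x512 frame:
mask = torch.zeros(384, 512)
mask[150:270, 200:360] = 1.0

for gs in (40, 60):
    pts = sample_masked_grid(gs, mask)
    print(f"{gs * gs} sampled -> {pts.shape[0]} kept")
# The kept count (the third dimension of pred_tracks) depends on how much of
# the grid lands inside the mask, so it is far below grid_size**2.
```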
Once again, thanks for your prompt and clear answer.
Thanks for sharing this amazing work. On both Colab and my computer (Nvidia RTX 3080), when I try to run the segmentation part or the dense tracks I get:
OutOfMemoryError: CUDA out of memory. Tried to allocate 2.81 GiB (GPU 0; 15.74 GiB total capacity; 13.82 GiB already allocated; 605.50 MiB free; 13.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Is there a way to optimize that? Thanks for your help.
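As an aside, the max_split_size_mb hint printed in the error can be tried through the PYTORCH_CUDA_ALLOC_CONF environment variable, which must be set before CUDA is initialized; a sketch (128 MB is an arbitrary starting value, not a recommendation from this thread):

```python
import os

# Set this before the first CUDA call; in a notebook, put it in the very
# first cell, before importing torch.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # noqa: E402

# Run the demo as usual afterwards. Smaller split sizes can reduce
# "reserved >> allocated" fragmentation OOMs, at some throughput cost.
```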