The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Apache License 2.0
VRAM filling up after processing multiple Videos #451
I have noticed that after processing a video and building a new instance of SAM 2, the GPU VRAM fills up further on every loop iteration. Deleting the predictor and the inference state and emptying the CUDA cache reduced the growth from about 0.9 GB to around 0.3 GB per loop.
Once GPU memory is exhausted, SAM 2 and the task crash.
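A minimal sketch of the per-loop cleanup described above. `FakePredictor` is a hypothetical stand-in for the SAM 2 video predictor (real code would build one with `build_sam2_video_predictor` and, assuming the usual video predictor API, could also call `predictor.reset_state(inference_state)` before releasing it); the CUDA cache call is shown as a comment since it only applies on a GPU run:

```python
import gc
import weakref

class FakePredictor:
    """Hypothetical stand-in for the SAM 2 video predictor."""
    def __init__(self):
        # Simulates per-video state that holds on to memory.
        self.inference_state = {"frames": list(range(10_000))}

def release(predictor):
    """Drop the predictor's state and our reference to it, then force
    garbage collection so the memory can actually be reclaimed."""
    del predictor.inference_state
    del predictor
    gc.collect()
    # On a real CUDA run, also return cached blocks to the driver:
    # torch.cuda.empty_cache()

# Usage: confirm the predictor object is truly freed after release().
p = FakePredictor()
ref = weakref.ref(p)
release(p)
p = None  # drop the caller's reference as well
gc.collect()
print(ref() is None)  # True once no strong references remain
```

The key point is that `del` alone only removes a name; memory is reclaimed only once every reference (the caller's included) is gone, which is why the per-loop growth shrank but did not vanish in the report above.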