Open · erictan23 opened this issue 9 months ago
Not a maintainer, but it looks like the error occurs because you simply don't have enough VRAM to run inference on a batch of that size. Try reducing the batch size until you reach a load your GPU can handle.
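As a rough illustration of that advice, here is a minimal sketch of chunked inference in plain PyTorch; `infer_in_chunks`, `model`, and `images` are hypothetical placeholders, not names from grounded_sam_demo.py:

```python
import torch

def infer_in_chunks(model, images, batch_size=4, device="cuda"):
    """Run inference in small chunks so peak VRAM stays bounded.

    `model` and `images` are generic PyTorch placeholders, not names
    taken from grounded_sam_demo.py.
    """
    outputs = []
    with torch.no_grad():  # no autograd graph, so activations are freed eagerly
        for chunk in torch.split(images, batch_size):
            out = model(chunk.to(device))
            outputs.append(out.cpu())  # move each result off the GPU right away
    return torch.cat(outputs)
```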
Same problem here: 12 GB of VRAM, and it runs out of memory on the second loop iteration.
Adding plt.close('all') may fix it.
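For context: the demo draws its mask overlays with matplotlib, and every figure stays in memory until it is explicitly closed, so figures pile up across loop iterations. A minimal sketch of where the call would go (the loop body is a stand-in for the demo's real code, and the input directory is hypothetical):

```python
import glob
import matplotlib.pyplot as plt
import torch

for image_path in glob.glob("inputs/*.jpg"):  # hypothetical input directory
    fig = plt.figure(figsize=(10, 10))
    # ... run GroundingDINO + SAM here and draw the masks onto the figure ...
    fig.savefig(image_path + ".mask.jpg")
    plt.close('all')          # release every figure matplotlib is still holding
    torch.cuda.empty_cache()  # return cached, now-unused CUDA blocks to the driver
```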
Hello! Thank you for your work. I would like to ask some questions about predicting on multiple images using grounded_sam_demo. I have looked through previous issues, and there seems to be a way to perform multi-batch inference. I am not sure exactly how it works, but I tried creating a for loop over my image_paths to run prediction on all the .jpg files in my image folder. However, on my second loop iteration CUDA ran out of memory, so I would like to ask whether anyone has a solution for this.
This is roughly what I adjusted in grounded_sam_demo.py:
This shows that CUDA ran out of memory...
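(Not from the thread, but a common cause of exactly this second-iteration OOM: Python still holds references to the previous iteration's GPU tensors, so PyTorch can never free them. Below is a minimal, self-contained sketch of the cleanup pattern, with a dummy stand-in for the real GroundingDINO + SAM forward pass:)

```python
import gc
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

def dummy_model(x):
    """Hypothetical stand-in for the GroundingDINO + SAM forward pass."""
    return x * 2

for step in range(3):                          # e.g. one iteration per image
    image = torch.randn(1, 3, 1024, 1024, device=device)
    with torch.no_grad():                      # don't keep the autograd graph alive
        masks = dummy_model(image)
    result = masks.cpu()                       # keep only a CPU copy of the output
    del image, masks                           # drop the GPU tensor references...
    gc.collect()                               # ...collect any lingering cycles...
    torch.cuda.empty_cache()                   # ...then release the cached blocks
```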