erictan23 opened this issue 5 months ago
Not a maintainer, but it looks like the error occurs because your system simply does not have enough VRAM to run inference on a batch of that size. Try reducing the batch size until the load is one your GPU can handle.
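A minimal sketch of that suggestion: split the image list into small batches and feed the model one batch at a time. `run_inference` here is a hypothetical stand-in for the actual model call in the demo script, not a real function from the repo.

```python
def chunked(items, batch_size):
    """Yield successive slices of at most batch_size items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Usage sketch: process a few images per forward pass instead of all at
# once; shrink batch_size until the GPU stops running out of memory.
# for batch in chunked(image_paths, batch_size=2):
#     run_inference(batch)   # hypothetical stand-in for the model call
```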
Same problem here, with 12 GB of VRAM: the loop runs out of memory on the second iteration.
Calling `plt.close('all')` after each iteration may fix it.
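Building on that, here is a hedged sketch of a per-image loop that releases memory after each file. `predict_one` is a hypothetical placeholder for the GroundingDINO + SAM calls in `grounded_sam_demo.py`; the cleanup calls themselves (`plt.close`, `gc.collect`, `torch.cuda.empty_cache`) are standard matplotlib/PyTorch APIs.

```python
import gc
import glob


def release_memory():
    """Free per-image state between loop iterations."""
    try:
        import matplotlib.pyplot as plt
        plt.close("all")  # open figures keep references to image arrays
    except ImportError:
        pass
    gc.collect()  # drop unreachable Python objects
    try:
        import torch
        if torch.cuda.is_available():
            # Return cached CUDA blocks so the next image starts fresh.
            torch.cuda.empty_cache()
    except ImportError:
        pass


def predict_folder(image_dir, predict_one):
    """Run predict_one (hypothetical per-image inference) over every .jpg."""
    results = []
    for path in sorted(glob.glob(f"{image_dir}/*.jpg")):
        results.append(predict_one(path))
        release_memory()
    return results
```

Wrapping the prediction call in `torch.no_grad()` inside `predict_one` also helps, since autograd buffers are not needed for inference.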
Hello! Thank you for your work. I would like to ask some questions about predicting multiple images with the grounded_sam_demo. Looking through previous issues, there seems to be a way to perform multi-batch inference, but I am not sure how it works. I tried wrapping the prediction in a for loop over my image_paths to process every .jpg in my image folder, but on the second iteration CUDA ran out of memory. Has anyone found a solution to this?
This is what I mainly adjusted in `grounded_sam_demo.py`: ![image](https://github.com/IDEA-Research/Grounded-Segment-Anything/assets/74151994/624b03e9-bdd6-48bf-bab1-55aca1d70707)
This shows that CUDA ran out of memory: ![image](https://github.com/IDEA-Research/Grounded-Segment-Anything/assets/74151994/4fce19c8-3b81-4b32-a32d-fe8eb7137f66)