Open LucasColas opened 1 year ago
Most likely you just need to move the model over to the GPU in order to get it to work properly. Something like:
sam.to(device="cuda")
You'd want to add that just after creating the sam variable, before creating the predictor. Normally you also need to do this for the input data (i.e. the image in this case), but the set_image(...) function should handle that for you.
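For context, here is a sketch of where that call fits in typical SAM boilerplate. The checkpoint filename and model type below are assumptions, not taken from the original post; the heavyweight SAM calls are shown as comments, with only a small device-picking helper left runnable:

```python
def pick_device(cuda_available: bool) -> str:
    # Map PyTorch's CUDA-availability flag to a device string.
    return "cuda" if cuda_available else "cpu"

# Typical placement in the SAM setup (sketch; the checkpoint filename and
# model type below are assumptions -- adjust them to your own setup):
#
#   import torch
#   from segment_anything import sam_model_registry, SamPredictor
#
#   sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
#   sam.to(device=pick_device(torch.cuda.is_available()))  # before the predictor
#   predictor = SamPredictor(sam)
#   predictor.set_image(image)  # set_image moves the image to the same device

print(pick_device(True))   # -> cuda
print(pick_device(False))  # -> cpu
```

The key point is that the .to(...) call happens after the model is built but before the predictor wraps it, so every subsequent forward pass runs on the GPU.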
It works! :)
Hello,
It seems Segment Anything doesn't use the GPU. Whenever I run boilerplate code like this one:
I open my task manager: the CPU is used far more than the GPU, which is barely used (1%). However, for a few seconds my GPU usage peaks at 100% (even though the code takes much longer than a few seconds to run).
torch.cuda.is_available() returns True, but judging by the overall speed and the task manager's graph, torch doesn't seem to be utilizing the GPU well.