pytorch-labs / segment-anything-fast

A batched offline inference oriented version of segment-anything
Apache License 2.0

Inference on A10 GPU becomes slower than original SAM #86

Closed GovernorHo closed 10 months ago

GovernorHo commented 10 months ago

Hi, I ran inference on an Nvidia A10 GPU, using sam_model_fast_registry to initialize the model and SamAutomaticMaskGenerator to generate masks, but inference is slower than with the original SAM. Why?
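For reference, a minimal sketch of the setup described above; the model type and checkpoint path are assumptions and not taken from this issue:

```python
import cv2
import torch
from segment_anything_fast import sam_model_fast_registry, SamAutomaticMaskGenerator

# Assumed model type and checkpoint path -- substitute your own.
sam = sam_model_fast_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam = sam.to(device="cuda")

mask_generator = SamAutomaticMaskGenerator(sam)

# Load an image (placeholder path) and generate masks.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)
```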

original SAM elapsed time: 6.36s per image. SAM-fast elapsed time: 10.92s per image.
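A sketch of how such per-image timings might be collected; the helper name and image list are placeholders. Synchronizing the GPU before reading the clock, and doing a warm-up pass first, matters for numbers like these because the first calls can include compilation and autotuning overhead:

```python
import time
import torch

def mean_time_per_image(mask_generator, images):
    # Warm-up pass so one-time compilation/autotuning does not skew the result.
    mask_generator.generate(images[0])
    torch.cuda.synchronize()

    start = time.perf_counter()
    for image in images:
        mask_generator.generate(image)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / len(images)
```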

cpuhrsch commented 10 months ago

Hello @GovernorHo - thank you for opening this issue. This is very similar to https://github.com/pytorch-labs/segment-anything-fast/issues/76 . I'll close this issue to keep the number of similar issues low, but please reopen it if you don't think that discussion applies to your ask.