pytorch-labs / segment-anything-fast

A batched, offline-inference-oriented version of segment-anything
Apache License 2.0

Inference on A10 GPU is slower than original SAM #86

Closed · StephenHe1992 closed 1 year ago

StephenHe1992 commented 1 year ago

Hi, I ran inference on an Nvidia A10 GPU, using sam_model_fast_registry to initialize the model and SamAutomaticMaskGenerator to generate the masks, but inference is slower than with the original SAM. Why?

Original SAM elapsed time: 6.36 s per image. SAM-fast elapsed time: 10.92 s per image.
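For reference, a minimal sketch of the setup described above, timing a single `generate()` call. The checkpoint filename, model type ("vit_h"), and test image are assumptions for illustration, not details taken from this report:

```python
# Rough reproduction sketch of the measurement described above.
# Assumed: ViT-H checkpoint on disk and a stand-in image; adjust to your setup.
import time

import numpy as np
import torch
from segment_anything_fast import SamAutomaticMaskGenerator, sam_model_fast_registry

checkpoint = "sam_vit_h_4b8939.pth"  # assumed checkpoint path
sam = sam_model_fast_registry["vit_h"](checkpoint=checkpoint)
sam.to(device="cuda")
mask_generator = SamAutomaticMaskGenerator(sam)

# Stand-in HxWx3 uint8 image; replace with a real image for a meaningful number.
image = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)

torch.cuda.synchronize()
start = time.perf_counter()
masks = mask_generator.generate(image)  # note: warm-up iterations are omitted here
torch.cuda.synchronize()
print(f"elapsed: {time.perf_counter() - start:.2f}s, {len(masks)} masks")
```

Because no warm-up call is made before timing, a single-image measurement like this can include one-time setup cost rather than steady-state per-image latency.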

cpuhrsch commented 1 year ago

Hello @GovernorHo - thank you for opening this issue. This is very similar to https://github.com/pytorch-labs/segment-anything-fast/issues/76. I'll close this issue to keep the number of duplicate issues low, but please reopen it if you don't think that one applies to your case.