GitChanyoung opened 8 months ago
@GitChanyoung, thanks for your interest! We will take a look at segment everything.
I experienced a similar thing. It seems the code provided in [EfficientSAM_segment_everything_example.ipynb] is not adapted to GPU. I tried moving the network to the GPU, but that does not help much (probably because my modification is incomplete): it saves around ten seconds, yet inference still takes around five to ten seconds. Also, on the same image it runs a little faster after the first run.
Hi, thanks for your interest. The segment-everything notebook example is a work in progress and currently supports only the CPU version (you can see that tensors are forced to CPU in the function), using a naive, straightforward algorithm. We will provide a quicker version in the future.
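For anyone experimenting with a GPU port in the meantime, the key point from the comments above is that moving only the model is not enough; the input tensors must follow it to the same device. A minimal sketch of that pattern (the `Conv2d` below is a stand-in for the actual EfficientSAM network, not the real model):

```python
import torch

# Pick the GPU when available; everything below works on CPU too.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in for the jit-compiled EfficientSAM model (assumption: the real
# TorchScript module would be loaded with torch.jit.load instead).
model = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1).to(device).eval()

image = torch.rand(1, 3, 64, 64)   # CPU tensor, as the notebook produces
image = image.to(device)           # move the input to the model's device

with torch.no_grad():
    out = model(image)

# The output lives on the same device as the input; forgetting the
# image.to(device) step is what triggers a device-mismatch error.
assert out.device == image.device
```

The notebook's helper forces tensors to CPU internally, so those `.cpu()` calls would also need to be replaced with `.to(device)` for the port to be complete.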
When running the Tiny EfficientSAM on a CPU, this model is significantly slower than FastSAM-S. I conducted a thorough comparison between FastSAM and EfficientSAM.
The following result is from running the code on Google Colab:
Inference using: efficientsam_ti_cpu.jit
Input size: torch.Size([3, 512, 1024])
Preprocess Time: 79.8783 ms
Inference Time: 6939.1549 ms
same here, more than 30s on rtx3090, not efficient at all
On my 3090 Ti with an i7-12700, it takes 30-40 ms per image. The problem is that the first inference is slower, and I'm not sure why.
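The first-call slowdown described above is consistent with warm-up costs (CUDA context creation, cuDNN autotuning, and TorchScript optimizing the graph on its early calls), which is also why naive benchmarks report tens of seconds. A hedged timing sketch, again using a stand-in module rather than EfficientSAM itself, that separates the first call from steady-state latency and synchronizes the GPU before reading the clock:

```python
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in model (assumption: replace with the real EfficientSAM module).
model = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1).to(device).eval()
x = torch.rand(1, 3, 256, 256, device=device)

def timed_ms(fn):
    # CUDA launches are asynchronous, so synchronize around the timer;
    # without this the measured time is just the kernel-launch overhead.
    if device.type == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    fn()
    if device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - t0) * 1000.0

with torch.no_grad():
    first = timed_ms(lambda: model(x))      # includes one-time warm-up cost
    for _ in range(3):                      # a few warm-up iterations
        model(x)
    steady = timed_ms(lambda: model(x))     # steady-state latency

print(f"first call: {first:.1f} ms, after warm-up: {steady:.1f} ms")
```

Reporting only the post-warm-up number is the usual convention for inference benchmarks, which would reconcile the 30-40 ms and 30 s figures in this thread.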
Hello, I'm grateful for your research. I tried segment everything with the code you shared. I thought it would be fast, but it is very slow, averaging 13000 ms. Can you tell me why?
Output:
image size: 640x640, GPU: RTX A6000