Closed by VisStudio 11 months ago
Image embedding (preprocessing) does indeed take some time, but subsequent inference is much faster. Try using the MobileSAM model, which significantly speeds up the overall process.
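The behavior described here (slow once per image, fast per prompt) comes from the encode-once/decode-many structure of SAM: the heavy image encoder runs only when a new image is set, while the light mask decoder runs per point or box. A minimal sketch of that caching pattern, with stand-in encode/decode functions (the class and method names here are illustrative, not the actual library API):

```python
import hashlib
import numpy as np

class CachedSam:
    """Sketch: run the heavy image encoder once per image, then reuse the
    cached embedding for every subsequent prompt on that same image."""

    def __init__(self):
        self._image_key = None
        self._embedding = None

    def _encode(self, image):
        # Stand-in for the SAM image encoder -- the multi-second step on CPU.
        return image.astype(np.float32).mean(axis=-1)

    def set_image(self, image):
        # Only re-run the encoder when the image actually changes.
        key = hashlib.md5(image.tobytes()).hexdigest()
        if key != self._image_key:
            self._embedding = self._encode(image)
            self._image_key = key

    def get_mask(self, point):
        # Stand-in for the light mask decoder -- the ~50 ms step.
        # Marks pixels brighter than the clicked reference point.
        y, x = point
        return self._embedding > self._embedding[y, x]
```

Calling `set_image` again with the same image is nearly free, and each `get_mask` call only touches the cached embedding, which is why different points on one photo return in milliseconds while a new photo pays the full encoder cost again.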
Thanks a lot for your invaluable advice.
Every time I import a new image, the code runs particularly slowly, especially when executing the sam.getMask() function. But when I select different points on the same image, it is processed quickly. Why is that? Is it because the code has to re-run the decoder model every time a new image is loaded?

For comparison: when I enter the same bounding box but use a different photo, the code takes about 11200 ms to process it; when I use the same photo but different points, it takes about 50 ms.

Preprocess device: CPU (i5-13400)
Sam device: CPU (i5-13400)

Since I use the CPU for both preprocessing and SAM, is there any way to reduce the processing time?