I tried prompting the model with a coarse segmentation mask obtained from another model. The prompt isn't very accurate, but it at least covers the object of interest. I expected SAM 2 to refine the mask into something more precise, but the results are no better than the prompt.
Here is the mask prompt:
And here is the result from SAM 2:
There are small patches scattered on the right side, and the left side of the mask is completely off.
What could be the reason? Does SAM 2 always need a point/box prompt to refine a mask?
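For reference, here is a sketch of the preprocessing I believe the mask prompt needs. This is an assumption based on the original SAM predictor interface, where `mask_input` expects low-resolution (1×256×256) *logits* rather than a full-resolution binary mask; the `mask_to_logits` helper and the logit values ±8 below are my own illustrative choices, not part of the SAM 2 API.

```python
import numpy as np

def mask_to_logits(mask: np.ndarray, size: int = 256,
                   lo: float = -8.0, hi: float = 8.0) -> np.ndarray:
    """Downsample a binary mask to the low-res grid and map {0, 1} to logits.

    Assumption: as in the original SAM, the predictor's `mask_input`
    takes a (1, 256, 256) array of logits, not a full-resolution 0/1 mask.
    The logit magnitudes `lo`/`hi` are illustrative values.
    """
    h, w = mask.shape
    # Nearest-neighbor downsample via index sampling (avoids a cv2 dependency).
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    small = mask[np.ix_(rows, cols)].astype(np.float32)
    logits = np.where(small > 0.5, hi, lo).astype(np.float32)
    return logits[None, :, :]  # shape (1, 256, 256)

# Hypothetical usage with the SAM 2 image predictor (names assumed):
# masks, scores, _ = predictor.predict(
#     mask_input=mask_to_logits(coarse_mask),
#     multimask_output=False,
# )
```

If the predictor is instead given a raw 0/1 mask at image resolution, the decoder may interpret it as near-zero-confidence logits everywhere, which could explain the scattered patches.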