The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Apache License 2.0 · 47.38k stars · 5.61k forks
Does SAM support multiple foreground and background points as a prompt? #432
I have been trying to observe how performance changes as I vary the number of foreground and background points. I am passing a 384 x 384 image to the model, with point coordinates given as [[x1,y1],[x2,y2],[x3,y3], ...] and the corresponding labels as [0, 1, 1, 1, 0, ...]. But performance decreases as I increase the number of points, which is counter-intuitive. Can anyone help me think of a reason why this is happening?
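For reference, a minimal sketch of how multi-point prompts are typically passed to `SamPredictor` (the coordinates and label values below are made up for illustration; SAM's convention is label 1 for foreground points and 0 for background points):

```python
import numpy as np

# Hypothetical point prompt for a 384 x 384 image: SAM expects an (N, 2)
# array of (x, y) pixel coordinates and a matching (N,) array of labels,
# where label 1 marks a foreground point and label 0 a background point.
point_coords = np.array([[100, 120], [200, 180], [50, 300]], dtype=np.float32)
point_labels = np.array([1, 1, 0], dtype=np.int32)  # two foreground, one background

# The number of labels must match the number of points.
assert point_coords.shape == (len(point_labels), 2)

# With a loaded model, the prediction call would look like the following
# (commented out here because it needs a downloaded checkpoint):
#
#   from segment_anything import sam_model_registry, SamPredictor
#   sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
#   predictor = SamPredictor(sam)
#   predictor.set_image(image)  # image: HxWx3 uint8 RGB array
#   masks, scores, logits = predictor.predict(
#       point_coords=point_coords,
#       point_labels=point_labels,
#       multimask_output=False,
#   )

print(point_coords.shape, point_labels.shape)
```

With several disambiguating points, `multimask_output=False` is commonly used so the model returns a single mask rather than three candidate masks for an ambiguous single-point prompt.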