When I set two prompt points to segment cat.jpg in the Python demo, I get the expected output: ![efficientvit_sam_demo_tensorrt](https://github.com/mit-han-lab/efficientvit/assets/80732290/e241acfe-f732-44bd-ab0f-fd934368aefe)
However, when I pass the same two points to my C++ inference demo, it only produces one segmentation: ![image](https://github.com/mit-han-lab/efficientvit/assets/80732290/15dddd55-bead-4724-8466-7585f23d20ec)
In my C++ inference demo, I store the two points in a vector and transform them with apply_coords. I then define the point_coords and point_labels inputs as plain float arrays, i.e. `float points[1][2][2]` and `float labels_[1][1][2]`, copy the host data to device memory, and run inference, but the output mask only contains one segmentation. I suspect the way I define the points (or labels) is wrong in my C++ demo, but I am not sure.
I would appreciate any suggestions. Best wishes!