lyxxxyl opened 1 month ago
When I run feature extraction on two images using the TF version, the shapes of the keypoints, descriptors, and scores are as follows.
The result of the PyTorch version is as follows.
For images with many feature points, the difference is even greater.
Hi, this is probably due to different confidence thresholds used to decide how many keypoints are kept. Please check that you are using the same threshold in both versions.
Note that there was already a similar issue on the topic: https://github.com/rpautrat/SuperPoint/issues/316
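To illustrate the point about thresholds, here is a minimal NumPy sketch of how a confidence threshold determines the number of surviving keypoints. This is illustrative only: the function name `filter_keypoints` and the threshold values are assumptions for the example, not SuperPoint's actual API or defaults.

```python
import numpy as np

def filter_keypoints(scores, conf_thresh):
    """Keep only keypoints whose detection score exceeds conf_thresh.

    scores: (N,) array of per-keypoint confidence scores.
    Returns the indices of the surviving keypoints.
    """
    return np.flatnonzero(scores > conf_thresh)

# Same score map, two different thresholds -> different keypoint counts,
# mimicking a configuration mismatch between two implementations.
rng = np.random.default_rng(0)
scores = rng.random(1000)           # stand-in for a detector's score map
kept_loose = filter_keypoints(scores, 0.015)   # permissive threshold
kept_strict = filter_keypoints(scores, 0.5)    # stricter threshold
print(len(kept_loose), len(kept_strict))
```

With everything else equal, the implementation configured with the lower threshold will always report at least as many keypoints, which matches the asymmetry described in this issue.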
I have found that, when extracting features from the same images, the TF version always detects more feature points than the PyTorch version. The weights I use are sp_v6 and superpoint_v6_from_tf.pth, respectively. Also, when the input is a blank image, the TF version returns points at the four corners of the image, while the PyTorch result is empty.