Closed Gilgamesh666666 closed 4 years ago
Hi @Gilgamesh666666 Could you specify which experimental results are inconsistent?
Hi @Gilgamesh666666 I have checked the code and found that this problem is due to the hard selection strategy (see Sec. 5 of our paper). You need to uncomment the following code to enable the hard selection strategy during testing, which prevents the selected keypoints from lying too close to each other. https://github.com/XuyangBai/D3Feat/blob/3577482efbc5154affcd734e16b0d10a73560e37/models/D3Feat.py#L108-L113 Without this strategy, performance with a small number of keypoints may degrade because of the non-uniform distribution of keypoints. As for the inlier ratio, we explained in the paper that the detected keypoints are properly ranked, so the top-ranked points have a higher probability of being matched; our method can therefore achieve better results when a smaller number of points is considered.
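For reference, the hard selection step can be sketched as a greedy, score-ordered pass with a minimum-distance constraint. This is a minimal sketch with hypothetical names and a made-up `min_dist`, not the exact code in `D3Feat.py`:

```python
import numpy as np

def hard_select(points, scores, num_keypoints, min_dist):
    """Greedy hard selection (sketch): walk keypoints in descending score
    order and skip any candidate closer than `min_dist` to one already kept,
    so the kept keypoints are spread out instead of clustering."""
    order = np.argsort(-scores)
    kept = []
    for idx in order:
        if all(np.linalg.norm(points[idx] - points[j]) >= min_dist for j in kept):
            kept.append(idx)
        if len(kept) == num_keypoints:
            break
    return np.asarray(kept)

# Toy cloud: two near-duplicate points plus two isolated ones.
pts = np.array([[0.0, 0.0, 0.0], [0.01, 0.0, 0.0],
                [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
sc = np.array([0.9, 0.8, 0.7, 0.6])
kept = hard_select(pts, sc, num_keypoints=3, min_dist=0.1)
# The second point is skipped: it is within 0.1 of the top-scored point.
```

Without this constraint, a small keypoint budget can be spent on several points from the same local neighborhood, which is the degeneration described above.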
Please uncomment the code snippet and retry; you should get results similar to the numbers in the paper. Also feel free to try the model trained with the circle loss, which gives better results.
Best, Xuyang
Hi, @XuyangBai Thanks for your reply. The problem is solved :) Best, Gilgamesh
Hi,
I ran test_kitti.py but I have the same issue.
This is what I obtained:
09/30 15:51:08 Total loss: 0.0, RTE: 0.1132173524658441, var: 0.007149245639922374, RRE: 0.5007891837937957, var: 0.16424901090433636, Success: 553.0 / 555 (99.63963963963964 %)
Then I also applied what is explained above (uncommenting the code), but I get a similar result:
09/30 15:55:25 Total loss: 0.0, RTE: 0.11517520964980987, var: 0.007538324615519327, RRE: 0.5016338146911243, var: 0.19387913899965886, Success: 551.0 / 555 (99.27927927927928 %)
Cheers
I noticed that here:
use_random_points is set to False, and lines 265 to 270 are commented out. I suppose that uncommenting those lines and commenting out the ones after is for testing the method using all keypoints. Correct?
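If it helps, my understanding of the two test-time sampling modes (the "Rand" vs. "pred" results reported in this thread) is roughly the following; the function name and signature are assumptions for illustration, not the repo's actual code:

```python
import numpy as np

def sample_keypoints(points, scores, num_keypoints, use_random_points):
    """Sketch of the two evaluation modes (hypothetical helper).
    'Rand' results: keypoints drawn uniformly at random.
    'pred' results: the top-scored detected keypoints."""
    if use_random_points:
        return np.random.choice(len(points), num_keypoints, replace=False)
    return np.argsort(-scores)[:num_keypoints]

scores = np.array([0.1, 0.9, 0.4, 0.7])
points = np.zeros((4, 3))
top2 = sample_keypoints(points, scores, 2, use_random_points=False)
```

With `use_random_points=False` the detector's score ranking is what drives the selection, which is why the two modes give such different inlier ratios.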
Sorry, I accidentally pressed 'Enter'. The following is the text.
Hi Xuyang, thanks for sharing your work. I ran your code following the README.md to test on the 3DMatch dataset, but the results I got are not consistent with those in the paper. Here are the results on my machine:
| Sampling | Keypoints | Recall | Avg. inliers | Avg. inlier ratio | Registration recall |
| --- | --- | --- | --- | --- | --- |
| Rand | 250 | 78.73% | 9.36 | 0.197 | 10.01% |
| Rand | 1000 | 91.16% | 39.14 | 0.283 | 44.10% |
| Rand | 5000 | 94.90% | 194.81 | 0.395 | 79.83% |
| pred | 250 | 90.67% | 22.28 | 0.509 | 65.44% |
| pred | 1000 | 93.50% | 77.38 | 0.495 | 83.82% |
| pred | 5000 | 95.42% | 259.56 | 0.455 | 88.58% |
As you can see, some results are better than those in the paper and some are worse, but the fluctuation is large. Also, the inlier ratio decreases as the number of keypoints increases. So I am confused about these results. I swear I did not change any code.
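For what it's worth, a decreasing inlier ratio with more keypoints is exactly what ranked keypoints would produce: if the per-keypoint match probability decays with rank, the average ratio over the top-k keypoints shrinks as k grows. A toy numeric sketch (all probabilities made up, purely illustrative):

```python
import numpy as np

# Hypothetical per-keypoint match probability, decaying with rank.
probs = 0.6 * 0.999 ** np.arange(5000)
# Expected inlier ratio when keeping the top-k ranked keypoints.
cumulative_ratio = np.cumsum(probs) / np.arange(1, 5001)
ratio_250 = cumulative_ratio[249]    # top 250 keypoints
ratio_5000 = cumulative_ratio[4999]  # all 5000 keypoints
# Each added keypoint is weaker than the running average, so the
# cumulative ratio is monotonically decreasing in k.
```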
Best Gilgamesh