magicleap / SuperGluePretrainedNetwork

SuperGlue: Learning Feature Matching with Graph Neural Networks (CVPR 2020, Oral)

About the RANSAC #56

Closed · atztao closed this 3 years ago

atztao commented 3 years ago

I think this is great work. But I have a question: when I tested the pretrained model, I found that most of the time the output does not have many erroneous matches. Do you use RANSAC in the model, or some other way, to remove the erroneous matches?

sarlinpe commented 3 years ago

Please refer to the paper and to the code. We use RANSAC to find the geometric inliers and estimate the transformation.
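For context, a minimal sketch of this post-processing step with OpenCV, assuming `kpts0`/`kpts1` are the keypoint coordinates and `matches` is SuperGlue's `matches0` output (these names and the fundamental-matrix choice are illustrative; when camera intrinsics are available, the essential matrix can be estimated instead):

```python
import cv2
import numpy as np

def ransac_filter(kpts0, kpts1, matches, thresh=1.0):
    """Keep only the geometrically consistent SuperGlue matches.

    kpts0: (N, 2) keypoints in image 0; kpts1: (M, 2) keypoints in image 1;
    matches: (N,) array, matches[i] = index into kpts1, or -1 if unmatched.
    """
    valid = matches > -1
    mkpts0 = kpts0[valid]
    mkpts1 = kpts1[matches[valid]]

    # RANSAC fits the two-view geometry and flags the inlier matches.
    F, mask = cv2.findFundamentalMat(
        mkpts0, mkpts1, cv2.FM_RANSAC, thresh, confidence=0.999)
    if F is None:  # estimation can fail with too few matches
        return np.empty((0, 2)), np.empty((0, 2)), None
    inliers = mask.ravel() == 1
    return mkpts0[inliers], mkpts1[inliers], F
```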

atztao commented 3 years ago

> Please refer to the paper and to the code. We use RANSAC to find the geometric inliers and estimate the transformation.

Thanks for your reply. And how do you set the matched and unmatched keypoints in the inputs; are they balanced? Also, how long does it take to converge: 2k or 20k iterations?

sarlinpe commented 3 years ago

Sorry, I don't understand your questions.

atztao commented 3 years ago

Oh, sorry. I mean that if I use 1024 keypoints as input, should there be 512 or more matched pairs?

sarlinpe commented 3 years ago

SuperGlue accepts an arbitrary number of keypoints in each image of the pair.
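To illustrate, here is a sketch of a forward pass with a different keypoint count per image, following the input format of `models/superglue.py` in this repository (batch size 1; the random tensors stand in for real SuperPoint outputs, and the exact dictionary keys are my reading of the code):

```python
import torch
from models.superglue import SuperGlue  # from this repository

superglue = SuperGlue({'weights': 'indoor'}).eval()

# 1024 keypoints in image 0 and 700 in image 1: the counts need not match.
data = {
    'keypoints0':   torch.rand(1, 1024, 2) * 480,
    'keypoints1':   torch.rand(1, 700, 2) * 480,
    'descriptors0': torch.rand(1, 256, 1024),
    'descriptors1': torch.rand(1, 256, 700),
    'scores0':      torch.rand(1, 1024),
    'scores1':      torch.rand(1, 700),
    # the image tensors are only used to normalize keypoint coordinates
    'image0': torch.empty(1, 1, 480, 640),
    'image1': torch.empty(1, 1, 480, 640),
}
with torch.no_grad():
    pred = superglue(data)

# pred['matches0'] has shape (1, 1024); each entry is an index into the
# 700 keypoints of image 1, or -1 for a keypoint left unmatched.
```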

atztao commented 3 years ago

Hey, I trained on SIFT with a batch size of 8 but the performance was poor (or it did not work at all), while with a batch size of 1 it was good. I trained for 10k iterations.

sarlinpe commented 3 years ago

That is quite surprising: larger batches should give better performance. I also don't expect only 10k iterations to be sufficient for good performance.

atztao commented 3 years ago

Should the number of matched keypoints in the input be equal to the number of unmatched ones?

sarlinpe commented 3 years ago

I am not sure I understand your question.

atztao commented 3 years ago

Ah, I mean: should the number of matched pairs and unmatched pairs be equal?

sarlinpe commented 3 years ago

No, these numbers can be different.
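The reason is the dustbin in the paper's optimal-transport formulation: every keypoint with no correspondence is assigned to an extra row or column of the augmented (N+1) x (M+1) assignment matrix, so matched and unmatched counts are free to differ. A hedged sketch of how such a ground-truth target could be built (the helper is illustrative, not code from this repository):

```python
import torch

def build_gt_assignment(pairs, num_kpts0, num_kpts1):
    """Ground-truth (N+1) x (M+1) assignment with a dustbin row/column.

    pairs: list of (i, j) ground-truth correspondences; keypoints that
    appear in no pair are sent to the dustbin.
    """
    gt = torch.zeros(num_kpts0 + 1, num_kpts1 + 1)
    matched0 = torch.zeros(num_kpts0, dtype=torch.bool)
    matched1 = torch.zeros(num_kpts1, dtype=torch.bool)
    for i, j in pairs:
        gt[i, j] = 1.0
        matched0[i], matched1[j] = True, True
    # Unmatched keypoints go to the dustbin column / row.
    gt[torch.arange(num_kpts0)[~matched0], num_kpts1] = 1.0
    gt[num_kpts0, torch.arange(num_kpts1)[~matched1]] = 1.0
    return gt

# 5 and 3 keypoints, two true correspondences: the remaining 3 + 1
# keypoints all land in the dustbin, with no balancing required.
gt = build_gt_assignment([(0, 2), (3, 1)], 5, 3)
```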

atztao commented 3 years ago

Finally, this is good work, thank you. But I have another question: the network easily produces a NaN loss if I do not add BN or ReLU in the last layer.

sarlinpe commented 3 years ago

I think this might be due to noisy ground-truth supervision. I recommend monitoring the magnitude of the gradients in the network and the magnitude of the score matrix. Please close this issue if your problem is solved.
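A minimal sketch of such monitoring in PyTorch (`model`, `optimizer`, `loss`, and `scores` are placeholders for your own training loop):

```python
import torch

def monitored_step(model, optimizer, loss, scores, max_norm=100.0):
    """One optimizer step with the two checks suggested above.

    scores: the raw score matrix from the matching head, whose magnitude
    should stay bounded; loss: the already-computed training loss.
    """
    # Large score magnitudes make the Sinkhorn/log-softmax stage overflow
    # and are an early warning sign before the loss turns NaN.
    print(f'max |score|: {scores.abs().max().item():.2f}')

    optimizer.zero_grad()
    loss.backward()

    # clip_grad_norm_ returns the total gradient norm before clipping,
    # so it doubles as a gradient-magnitude monitor.
    total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    if torch.isfinite(total_norm):
        optimizer.step()
    else:
        print('non-finite gradients; skipping this step')
```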

atztao commented 3 years ago

Thank you very much.