Closed — ufukefe closed this issue 2 years ago
Hi @ufukefe, we have attempted to solve this problem of feature matching for in-plane rotated images, as well as matching in images with large viewpoint changes. We achieve high viewpoint invariance by using rotation homographies during training and by matching in an orthographic view instead of the perspective view. You can find our work here: https://github.com/UditSinghParihar/RoRD. Hope this helps solve your issue!
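For readers unfamiliar with the rotation-homography idea mentioned above, here is a minimal numpy sketch of the underlying geometry: building a 3×3 homography that rotates image coordinates about the image centre and applying it to keypoints. This is only an illustration of the concept, not RoRD's actual code; the function names are my own.

```python
import numpy as np

def rotation_homography(angle_deg, w, h):
    """3x3 homography rotating image coordinates about the image centre.
    Illustrative sketch of the rotation-homography idea, not RoRD's code."""
    t = np.deg2rad(angle_deg)
    c, s = np.cos(t), np.sin(t)
    cx, cy = w / 2.0, h / 2.0
    # translate centre to origin, rotate, translate back
    T1 = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1.0]])
    R  = np.array([[c, -s, 0], [s,  c, 0], [0, 0, 1.0]])
    T2 = np.array([[1, 0,  cx], [0, 1,  cy], [0, 0, 1.0]])
    return T2 @ R @ T1

def apply_h(H, pts):
    """Apply a homography to an Nx2 array of (x, y) points."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = (H @ p.T).T
    return q[:, :2] / q[:, 2:3]

H = rotation_homography(90.0, 640, 480)
# the image centre is a fixed point of the rotation
print(apply_h(H, np.array([[320.0, 240.0]])))  # -> [[320. 240.]]
```

The same matrix can be passed to a warping routine (e.g. OpenCV's `cv2.warpPerspective`) to synthesize rotated training views, which I believe is the spirit of the approach described above.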
I tried the suggestion here: https://github.com/kyuhyoung/SuperGluePretrainedNetwork?ts=2
Hi, thanks for your excellent work and for sharing it!
I have spent some time testing the D2-Net algorithm on my dataset, which consists of 36 UAV images of a single building. The dataset includes:

- 2 different illuminations
- 2 different scales
- 3 different viewpoints on the yaw axis
- 3 different viewpoints on the pitch axis

Unfortunately, I have observed that D2-Net could not find suitable matches, especially in the cases of in-plane rotations and extreme viewpoint changes, and I would like to improve this. With your permission, I would like to ask two questions.
1) Do you have any suggestions for arranging the testing conditions to improve performance?
Here are my testing details:
2) My dataset contains some in-plane rotations. I had a similar situation with SuperPoint, and I tried the method that @Skydes suggests here. But this did not work for D2-Net. I may be wrong, but I think that when I rotate an image and try to match a 640x480 image against a 480x640 one, I should also rescale the rotated image due to the nature of the D2-Net model. Do you have any other suggestions for improving performance on in-plane rotations?
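One common workaround for in-plane rotation (and, as I understand it, the spirit of the suggestion referenced above) is to run the detector on the image rotated by 0/90/180/270 degrees, keep the rotation that yields the most matches, and map the keypoints back to the original frame. Below is a hedged numpy sketch of just the coordinate remapping step; the D2-Net extraction and matching calls are omitted, and the function name is my own.

```python
import numpy as np

def unrotate_kpts(kpts, k, w, h):
    """Map (x, y) keypoints detected on np.rot90(img, k) back to the
    original w x h image frame. Sketch only: assumes the detector
    returns pixel coordinates; feature extraction/matching is omitted."""
    x, y = kpts[:, 0], kpts[:, 1]
    k %= 4
    if k == 1:  # detected on image rotated 90 deg (np.rot90 once)
        return np.stack([w - 1 - y, x], axis=1)
    if k == 2:  # detected on image rotated 180 deg
        return np.stack([w - 1 - x, h - 1 - y], axis=1)
    if k == 3:  # detected on image rotated 270 deg
        return np.stack([y, h - 1 - x], axis=1)
    return kpts.copy()
```

In a full pipeline one would extract features from each `np.rot90(img, k)`, match against the reference image, pick the `k` with the most inlier matches, and remap with the function above. Note this does not by itself address the rescaling concern mentioned in the question (a 90-degree rotation swaps the image dimensions), which may still require resizing before feeding the network.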