Closed: Youyi7 closed this issue 4 years ago.
I would also like to produce these pictures, since I couldn't find the one-to-one correspondences. I guess these pictures can easily be obtained from the extrinsics.
The pipeline we used for plotting the qualitative examples is the following: extract D2-Net features for the two images, run mutual NN matching, and then plot the matches that pass geometric verification. For the pictures in the teaser (the ones above), we used a simple homography fitting.
Sadly I don't have access anymore to the code I used for this, but I'll rewrite it and add it to the repository next week.
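The matching step of the pipeline above is straightforward to sketch. Below is a minimal NumPy implementation of mutual nearest-neighbour matching between two descriptor sets; the function name `mutual_nn_matches` is my own and not from the D2-Net repository, and the real script may differ in details (e.g. distance metric or batching).

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Return (i, j) index pairs where a[i]'s nearest neighbour in b is b[j]
    and b[j]'s nearest neighbour in a is a[i] (mutual NN matching)."""
    # Pairwise L2 distances between the two descriptor sets.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_ab = dists.argmin(axis=1)      # best match in b for each descriptor in a
    nn_ba = dists.argmin(axis=0)      # best match in a for each descriptor in b
    ids_a = np.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == ids_a    # keep only mutual nearest neighbours
    return np.stack([ids_a[mutual], nn_ab[mutual]], axis=1)
```

The resulting index pairs select the keypoint coordinates that are then passed to geometric verification (e.g. RANSAC-based homography fitting).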
Perfect, thanks a lot
I am also very interested in this comparison, so really looking forward to your code!
Great job on the paper btw :)
I rewrote the script for generating the qualitative matches in Python and added it to the repository - the plots are not identical to the ones from the paper but they are still quite good in my opinion! You can check the script out at https://github.com/mihaidusmanu/d2-net/tree/master/qualitative
Feel free to let me know if you run into any issues.
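For the homography-based verification mentioned above, the script presumably uses something like OpenCV's `cv2.findHomography` with RANSAC. To show the idea in a self-contained way, here is a bare least-squares (DLT) sketch of homography fitting and inlier selection; the function names are mine, and a robust estimator would wrap this in RANSAC rather than fit all matches at once.

```python
import numpy as np

def fit_homography(src, dst):
    """Fit a 3x3 homography mapping src -> dst (both Nx2, N >= 4) via DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def project(H, pts):
    """Apply homography H to Nx2 points and dehomogenize."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

def verified_matches(src, dst, thresh=3.0):
    """Boolean mask of matches whose reprojection error is below thresh px."""
    err = np.linalg.norm(project(fit_homography(src, dst), src) - dst, axis=1)
    return err < thresh
```

Only the matches flagged by `verified_matches` would then be drawn, which is what produces the clean-looking correspondence figures.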
Is there a way to produce at least some visual output like the figures in the paper?
Or perhaps you could share a quick hack that enables some visualization? Since you generated the figures in the paper, the module must already exist somewhere in your code; it just isn't included in the repository, I guess.
P.S.: The work looks great, and we would really like to invest our time in using it. If you can help people in this regard, I believe you will get a lot of citations in the near future :)