Closed — TwiceMao closed this issue 6 months ago
Point 1 is right, and I don't know what you mean in point 2.
OK, let me explain more explicitly. For example, suppose a pair of matched points has left-image UV coordinates (1.0, 5.0) and right-image UV coordinates (14.786, 45.675). When you use this match in a downstream task such as structure-from-motion, do you round the right UV coordinate (14.786, 45.675)? If it is rounded down, it becomes (14, 45). If you do round, which rounding method do you use?
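For reference, the two rounding options being asked about differ on the same example coordinate. This is just an illustration of the question, not anything the repository actually does:

```python
import math

# The right-image coordinate from the example above.
uv_right = (14.786, 45.675)

# Option 1: round down (floor), as in the "(14, 45)" example.
floored = tuple(math.floor(c) for c in uv_right)

# Option 2: round to the nearest integer.
rounded = tuple(round(c) for c in uv_right)

print(floored)  # (14, 45)
print(rounded)  # (15, 46)
```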
Oh, I see. For the visual localization problem we use coarser coordinates. Specifically, our approach is intermediate between LoFTR and Patch2Pix: we take a weighted average of all the matched points that fall inside each 8×8 grid cell.
We don't simply round the coordinates.
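A minimal sketch of the idea described above: group sub-pixel matches by the 8×8 cell they fall into and replace each group with its confidence-weighted average. The function name, the `scores` weights, and the cell size are illustrative assumptions, not the authors' actual implementation:

```python
from collections import defaultdict

def average_per_cell(matches, scores, cell=8):
    """matches: list of (u, v) float coordinates; scores: matching confidences.

    Returns a dict mapping each occupied (cell_u, cell_v) index to the
    score-weighted average coordinate of the points inside that cell.
    """
    buckets = defaultdict(list)
    for (u, v), w in zip(matches, scores):
        # Integer index of the 8x8 cell containing this point.
        key = (int(u // cell), int(v // cell))
        buckets[key].append(((u, v), w))

    averaged = {}
    for key, pts in buckets.items():
        total = sum(w for _, w in pts)
        averaged[key] = (
            sum(u * w for (u, _), w in pts) / total,
            sum(v * w for (_, v), w in pts) / total,
        )
    return averaged
```

For example, two equally weighted points (1.0, 1.0) and (3.0, 3.0) in the same cell collapse to (2.0, 2.0), while an isolated point like (14.786, 45.675) keeps its sub-pixel value rather than being rounded.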
Sorry, what did you mean by "we will use coarser coordinates"? Does it mean that the matched coordinates in the right image are all integers? Visual localization requires accurate feature matching: for the same SfM method, the more accurate the matches, the more accurate the output poses and point cloud. Wouldn't coarse matching therefore hurt visual localization? Thanks very much! Also, could you please explain how the matched coordinates on the right are handled? Thanks again!
Our approach is a variant of https://github.com/GrumpyZhou/image-matching-toolbox; you can follow that repository to see how it works.
OK, thanks.
Hi @xuanlanxingkongxia
Is this normal? In the matching results, the coordinates in the left image are typically integers, while the coordinates in the right image are floating-point values.
When you actually use the matches, do you round the UV coordinates of the right-image points? If yes, how do you round them?