magicleap / SuperGluePretrainedNetwork

SuperGlue: Learning Feature Matching with Graph Neural Networks (CVPR 2020, Oral)

Extent of homographies for synthetic pre-training #75

Closed: Valentyn1997 closed this issue 3 years ago

Valentyn1997 commented 3 years ago

I am trying to reproduce the synthetic pre-training with random homographies and photometric transformations. Figure 13 in the paper shows some examples, where the random homographies appear restricted, e.g., to magnifying scale changes only. What constraints do you place when generating the homographies, such as a minimum percentage of occluded keypoints?

sarlinpe commented 3 years ago

Sorry for the late reply. I am not sure I understand what you mean by "limitations". The homographies that we sample, including those shown in Fig. 13, are not limited to scale changes; they include large perspective transformations and some rotation. We sampled them based on hand-tuned boundary values but did not enforce a minimum number of covisible points.
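
For readers landing here later, below is a minimal sketch of what such corner-perturbation homography sampling and a covisibility check could look like, assuming NumPy/OpenCV. This is not the authors' code, and all bound values (`max_persp`, `max_rot_deg`, `scale_range`, `max_shift`) are illustrative placeholders rather than the hand-tuned values used for the actual pre-training.

```python
import numpy as np
import cv2

def sample_homography(w, h, max_persp=0.2, max_rot_deg=25.0,
                      scale_range=(0.7, 1.4), max_shift=0.1, rng=None):
    """Sample a random homography by perturbing the four image corners.

    All bounds here are illustrative placeholders, not the values
    used for the SuperGlue synthetic pre-training.
    """
    rng = rng or np.random.default_rng()
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)

    # Perspective: jitter each corner independently.
    pts = corners + rng.uniform(-max_persp, max_persp, (4, 2)) * [w, h]

    # Rotation and isotropic scale about the image center.
    angle = float(rng.uniform(-max_rot_deg, max_rot_deg))
    scale = float(rng.uniform(*scale_range))
    R = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, scale)
    pts = pts @ R[:, :2].T + R[:, 2]  # apply the 2x3 affine to each corner

    # Translation.
    pts = pts + rng.uniform(-max_shift, max_shift, 2) * [w, h]

    return cv2.getPerspectiveTransform(corners, pts.astype(np.float32))

def covisible_fraction(kpts, H, w, h):
    """Fraction of keypoints (N x 2 array) still inside the image after warping."""
    pts = kpts.astype(np.float32).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    inside = (warped >= 0).all(axis=1) & (warped[:, 0] < w) & (warped[:, 1] < h)
    return float(inside.mean())
```

With such a sampler one could, for instance, resample until `covisible_fraction` exceeds a threshold, although per the answer above no such minimum was enforced in the original training.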

Valentyn1997 commented 3 years ago

Ok, got it. Thanks!