ducha-aiki / affnet

Code and weights for the local feature affine shape estimation paper "Repeatability Is Not Enough: Learning Affine Regions via Discriminability"
MIT License

About the AFFNET #35

Open Bonnie-gift opened 1 year ago

Bonnie-gift commented 1 year ago

Hi, I am a little confused about AffNet. Is the output of AffNet the predicted affine transformation? Looking through the code, I see that the input to AffNet is only one image. How can AffNet predict an affine transformation from a single input image?

ducha-aiki commented 1 year ago

The output of AffNet is the "canonical shape" of the local feature (think of SIFT), not the transformation between images. Please check the AffNet paper (or any other local feature detection paper, such as Hessian-Affine) for more information.
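To make the single-image point concrete, here is a minimal sketch (assuming a recent kornia install; the keypoint centres and scales below are made up purely for illustration) of running AffNet on one image: it takes keypoint locations plus the image, and outputs each keypoint's affine (elliptical) shape.

```python
import torch
import kornia.feature as KF

img = torch.rand(1, 1, 256, 256)  # a single grayscale image, batch size 1

# Hypothetical keypoint centres/scales for illustration; in practice they
# come from a detector such as Hessian or KeyNet.
centers = torch.tensor([[[64.0, 64.0], [128.0, 128.0]]])   # (B, N, 2)
scales = torch.full((1, 2, 1, 1), 16.0)                     # (B, N, 1, 1)
ori = torch.zeros(1, 2, 1)                                  # (B, N, 1)
lafs = KF.laf_from_center_scale_ori(centers, scales, ori)   # (B, N, 2, 3) local affine frames

affnet = KF.LAFAffNetShapeEstimator(pretrained=True).eval()
with torch.no_grad():
    # Same keypoints, now carrying the predicted canonical affine shape.
    shaped_lafs = affnet(lafs, img)

print(shaped_lafs.shape)  # torch.Size([1, 2, 2, 3]) -- no second image involved
```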

ducha-aiki commented 1 year ago

https://kornia-tutorials.readthedocs.io/en/latest/_nbs/image_matching_adalam.html

Check the image from this AffNet example. The ellipses in each image are what AffNet predicts. We then match them based on local descriptors, thus establishing the correspondences.

[example image: AffNet-estimated affine regions (ellipses) matched across two views]
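For the cross-image part, the correspondences come from matching descriptors computed on those per-image regions, roughly as in this sketch (the kornia modules and parameters below are my assumptions about a recent version, in the spirit of the linked tutorial, not code from this repo):

```python
import torch
import kornia.feature as KF

# Detector + AffNet shape estimation + HardNet descriptor, run per image.
feature = KF.KeyNetAffNetHardNet(num_features=512).eval()

img1 = torch.rand(1, 1, 256, 256)
img2 = torch.rand(1, 1, 256, 256)

with torch.no_grad():
    lafs1, _, descs1 = feature(img1)  # per-image ellipses (LAFs) and descriptors
    lafs2, _, descs2 = feature(img2)
    # The correspondence is established by descriptor matching, not by AffNet itself.
    dists, idxs = KF.match_smnn(descs1[0], descs2[0], 0.95)

print(idxs.shape)  # (num_matches, 2): index pairs into lafs1 / lafs2
```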