cvg / glue-factory

Training library for local feature detection and matching
Apache License 2.0

How to train lightglue with my extractor? #71

Open Adolfhill opened 8 months ago

Adolfhill commented 8 months ago

Thanks for your impressive work! I am trying to train LightGlue with my keypoint extractor, but there seems to be something wrong with the extractor. When I use my extractor without LightGlue, it extracts keypoints as in the photo below (the blue points): [image]

But when I try to train LightGlue, the keypoints look like the photo below (the blue points): [image]

Are there any docs about training LightGlue with a new extractor? Or could you please give some help? Thanks!

Phil26AT commented 7 months ago

Hi @Adolfhill, thank you for opening your issue. There seems to be a misalignment. How did you extract the keypoints? It seems they are detected on the non-transformed images, and you need to apply the homography transformation to your keypoints.
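For reference, applying a 3×3 homography to an array of keypoints can be sketched as below. This is a minimal NumPy illustration, not glue-factory's actual code; `warp_keypoints` is a hypothetical helper:

```python
import numpy as np

def warp_keypoints(kpts: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Apply a 3x3 homography H to an (N, 2) array of (x, y) keypoints."""
    # Lift to homogeneous coordinates: (N, 2) -> (N, 3)
    pts_h = np.hstack([kpts, np.ones((kpts.shape[0], 1))])
    # Project with H, then de-homogenize by the third coordinate
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]

# Example: a pure-translation homography shifting points by (+5, +3)
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
kpts = np.array([[0.0, 0.0], [1.0, 1.0]])
warped = warp_keypoints(kpts, H)
```

Keypoints detected on the *original* image would be warped this way so that they land on the transformed image.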

Adolfhill commented 7 months ago

Hello @Phil26AT, thank you for your assistance. What I have done is create a class for extracting keypoints, which inherits from gluefactory.models.base_model.BaseModel (and, of course, created a new configuration file).
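The general shape of such a wrapper might look like the sketch below. The `BaseModel` here is a local stand-in for `gluefactory.models.base_model.BaseModel` (an assumption; check the actual base class for its required hooks and conf handling), and the detector logic is a toy placeholder:

```python
import numpy as np

class BaseModel:
    # Stand-in for gluefactory.models.base_model.BaseModel (assumption:
    # the real class routes calls through a forward hook and handles
    # configuration; consult the actual source for the exact interface).
    def __call__(self, data: dict) -> dict:
        return self._forward(data)

class MyExtractor(BaseModel):
    default_conf = {"max_num_keypoints": 512}  # hypothetical conf entry

    def _forward(self, data: dict) -> dict:
        image = data["image"]  # (H, W) grayscale array for this sketch
        # Toy detector: take the brightest pixels as keypoints.
        n = self.default_conf["max_num_keypoints"]
        idx = np.argsort(image.ravel())[-n:]
        ys, xs = np.unravel_index(idx, image.shape)
        kpts = np.stack([xs, ys], axis=1).astype(np.float32)
        scores = image[ys, xs].astype(np.float32)
        descs = np.zeros((len(kpts), 128), dtype=np.float32)  # placeholder
        # Output keys a LightGlue-style matcher typically consumes:
        return {"keypoints": kpts,
                "keypoint_scores": scores,
                "descriptors": descs}
```

The important part is that the keypoint coordinates the extractor returns live in the coordinate frame of the image it was given, which is what the pipeline question below is about.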

Based on my understanding from reading gluefactory/models/two_view_pipeline.py, the process for generating keypoints is as follows:

  1. Read an image (line 284 in `_Dataset.__getitem__` in `gluefactory/datasets/homographies.py`).
  2. Apply two random homography transformations to the image (lines 296 and 297 in `_Dataset.__getitem__` in `gluefactory/datasets/homographies.py`).
  3. Extract keypoints from the transformed images (line 69 in `gluefactory/models/two_view_pipeline.py`).
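Under that reading, the sampled homographies still matter: they relate the two warped views to each other, e.g. for ground-truth correspondences. A small NumPy illustration (the matrices and `warp_points` helper are made up for the example, not taken from glue-factory):

```python
import numpy as np

def warp_points(pts: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Apply a 3x3 homography H to an (N, 2) array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

# Two homographies mapping the original image to view 0 and view 1
# (pure translations here, for easy checking).
H0 = np.array([[1.0, 0.0, 10.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
H1 = np.array([[1.0, 0.0,  0.0], [0.0, 1.0, 5.0], [0.0, 0.0, 1.0]])

# A keypoint detected directly on the warped view 0:
kpt_view0 = np.array([[30.0, 40.0]])

# Its ground-truth location in view 1 goes through the original image:
# view 0 -> original (H0^-1), then original -> view 1 (H1).
H_0to1 = H1 @ np.linalg.inv(H0)
kpt_view1 = warp_points(kpt_view0, H_0to1)
```

So keypoints detected on a warped view need no extra warping to be drawn on that view, but mapping them into the other view composes both homographies.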

Did I understand the process correctly? If the aforementioned pipeline is accurate, it appears that the keypoints are extracted from the transformed image, implying that I do not need to apply any homographic transformation if my goal is to mark keypoints on the transformed image.