Adolfhill opened this issue 8 months ago
Hi @Adolfhill, thank you for opening your issue. There seems to be a misalignment. How did you extract the keypoints? It looks like they were detected on the non-transformed images, so you need to apply the homography transformation to your keypoints.
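In case a concrete example helps, here is a minimal sketch of what "apply the homography to your keypoints" can look like with OpenCV. It assumes you have the 3x3 matrix `H` that was used to warp the image; `warp_keypoints` is just an illustrative helper, not part of glue-factory.

```python
import cv2
import numpy as np


def warp_keypoints(keypoints: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Apply a 3x3 homography H to an (N, 2) array of (x, y) keypoints."""
    pts = keypoints.reshape(-1, 1, 2).astype(np.float32)
    warped = cv2.perspectiveTransform(pts, H.astype(np.float32))
    return warped.reshape(-1, 2)


# Example: keypoints detected on the original image, H is the homography used
# to generate the transformed view; the warped points should line up when
# plotted on the transformed image.
# kpts_warped = warp_keypoints(kpts_original, H)
```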
Hello @Phil26AT, thank you for your assistance. What I did was create a class for extracting keypoints that inherits from gluefactory.models.base_model.BaseModel (and, of course, I created a new configuration file for it); a rough sketch is below.
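For reference, my wrapper looks roughly like this. It is a simplified, hypothetical sketch rather than my actual code: the interface (`default_conf`, `required_data_keys`, `_init`, `_forward`, `loss`) follows my reading of gluefactory/models/base_model.py, and `MyExtractor` with its dummy outputs is only a placeholder.

```python
import torch
from gluefactory.models.base_model import BaseModel


class MyExtractor(BaseModel):
    # placeholder config; the real values depend on the detector
    default_conf = {
        "max_num_keypoints": 1024,
        "descriptor_dim": 256,
    }
    required_data_keys = ["image"]

    def _init(self, conf):
        # build / load the actual detector + descriptor network here
        self.num_kpts = conf.max_num_keypoints
        self.dim = conf.descriptor_dim

    def _forward(self, data):
        image = data["image"]  # (B, C, H, W), as provided by the pipeline
        b, _, h, w = image.shape
        # dummy outputs so the sketch runs; replace with the real detector
        kpts = torch.rand(b, self.num_kpts, 2, device=image.device)
        kpts = kpts * torch.tensor([w, h], device=image.device, dtype=kpts.dtype)
        scores = torch.ones(b, self.num_kpts, device=image.device)
        desc = torch.nn.functional.normalize(
            torch.randn(b, self.num_kpts, self.dim, device=image.device), dim=-1
        )
        return {
            "keypoints": kpts,          # (B, N, 2), pixel coordinates (x, y)
            "keypoint_scores": scores,  # (B, N)
            "descriptors": desc,        # (B, N, D)
        }

    def loss(self, pred, data):
        # not needed if the extractor stays frozen while training LightGlue
        raise NotImplementedError
```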
Based on my understanding from reading gluefactory/models/two_view_pipeline.py, the process for generating keypoints is as follows:
Did I understand the process correctly? If that pipeline is accurate, the keypoints are extracted from the already-transformed image, which would mean I do not need to apply any homography to them before marking them on the transformed image.
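To make my reading concrete, the flow I have in mind is roughly the following. This is a simplified sketch of two_view_pipeline.py from memory, not the actual code; the exact key names (for example `view0`/`view1` and the `0`/`1` suffixes) may differ.

```python
# simplified sketch of how I understand TwoViewPipeline's forward pass
def two_view_forward(pipeline, data):
    # the dataset has already applied the homography, so view1's image is the
    # transformed one; the extractor therefore sees the transformed pixels
    pred0 = pipeline.extractor(data["view0"])  # keypoints in view0's pixel frame
    pred1 = pipeline.extractor(data["view1"])  # keypoints in view1's pixel frame
    pred = {
        **{k + "0": v for k, v in pred0.items()},
        **{k + "1": v for k, v in pred1.items()},
    }
    # LightGlue (the matcher) then receives both sets of keypoints together
    # with the ground-truth homography for supervision
    pred.update(pipeline.matcher({**data, **pred}))
    return pred
```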
Thanks for your impressive work! I am trying to train LightGlue with my own keypoint extractor, but there seems to be something wrong with the extractor. When I use my extractor without LightGlue, it extracts keypoints like in the photo below (the blue points):
But when I try to train LightGlue, the keypoints look like the photo below (the blue points):
Are there any docs about training LightGlue with a new extractor? Or could you please give some help? Thanks!