Closed adamzenith closed 6 months ago
Hello @adamzenith, thank you for your interest in our work!
I believe our architecture is fully compatible with their training pipeline. With minor adjustments, it should be possible to train XFeat on different image modalities as described in XoFTR. However, since we distill keypoints from ALIKE, to get better results in the sparse setting we recommend either using a self-supervised keypoint loss (as in ALIKE itself or SuperPoint), or distilling from ALIKE on the RGB images while enforcing keypoint repeatability on the thermal images. Stay tuned for the release of our training code!
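For illustration, a cross-modal repeatability term could look something like the sketch below: given a registered RGB/thermal pair, it penalizes disagreement between the two keypoint score maps so detections fire at the same locations in both modalities. This is a minimal toy example with assumed inputs (dense per-pixel score maps from any detector head), not the actual XFeat or ALIKE training loss.

```python
import numpy as np

def repeatability_loss(scores_a, scores_b, eps=1e-8):
    """Toy repeatability loss between two aligned keypoint score maps.

    scores_a, scores_b: HxW arrays of non-negative detector scores for a
    registered image pair (e.g. RGB and thermal). Returns a value in
    [0, 2]; 0 means the (normalized) maps agree perfectly.

    NOTE: illustrative stand-in only -- not the loss used by XFeat/ALIKE.
    """
    a = scores_a.ravel().astype(np.float64)
    b = scores_b.ravel().astype(np.float64)
    # Normalize each map to unit norm so the loss compares *where*
    # detections fire, not their absolute magnitudes.
    a = a / (np.linalg.norm(a) + eps)
    b = b / (np.linalg.norm(b) + eps)
    # Cosine-similarity-based penalty: 1 - <a, b>.
    return 1.0 - float(a @ b)

# Identical maps give (near-)zero loss; disjoint maps give a high loss.
rgb_scores = np.zeros((8, 8)); rgb_scores[2, 3] = 1.0
thermal_scores = rgb_scores.copy()
print(repeatability_loss(rgb_scores, thermal_scores))  # close to 0
```

In practice one would warp one score map into the other's frame with the known homography/registration before comparing, and use a differentiable framework (e.g. PyTorch) so the term can be backpropagated through the detector.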
Closing the issue. If you need further clarifications, feel free to re-open it!
Great work on this project, it looks very promising! What are your thoughts on using this architecture to find keypoints and descriptors across modalities? This paper attempts to match features across RGB and infrared images. Do you see any reasons why this might not work for your architecture?