Closed — litingsjj closed this issue 2 years ago
Thanks a lot! I compared some images with SuperPoint and SIFT. For scale changes, SIFT is much better than SuperPoint (and other CNN methods); the results are terrible if the object in image1 is four times larger than in image2. I will read the KeyNet and DISK papers you suggested. Also, for descriptor matching, do you have any advice on how to improve it? Some papers use transformers, like SuperGlue and LoFTR, to get better results, but they are very slow. I would really appreciate it if you could clear up my confusion, or even just provide some clues. Thanks again!
Yes, learned descriptors notoriously have problems with scale changes, and SIFT is still very good at handling them. This is still an active area of research, so I can't help you much there. The recent learned matchers like SuperGlue and LoFTR are indeed very good, but slower. There will always be some trade-off; it depends on your application.
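As a cheap baseline before reaching for a learned matcher, you can often clean up descriptor matches a lot with Lowe's ratio test plus a mutual (cross-check) nearest-neighbor filter. Here is a minimal pure-Python sketch of that idea; the toy 2-D "descriptors" and all function names are hypothetical, just to illustrate the filtering logic, not tied to SuperPoint or SIFT specifically:

```python
import math

def l2(a, b):
    # Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_matches(desc1, desc2, ratio=0.8):
    # For each descriptor in desc1, keep its nearest neighbor in desc2
    # only if it is clearly better than the second nearest
    # (Lowe's ratio test), which discards ambiguous matches.
    matches = {}
    for i, d1 in enumerate(desc1):
        dists = sorted((l2(d1, d2), j) for j, d2 in enumerate(desc2))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches[i] = dists[0][1]
    return matches

def mutual_matches(desc1, desc2, ratio=0.8):
    # Keep only matches that agree in both directions (cross-check):
    # i -> j survives only if j's best match is also i.
    fwd = ratio_matches(desc1, desc2, ratio)
    bwd = ratio_matches(desc2, desc1, ratio)
    return [(i, j) for i, j in fwd.items() if bwd.get(j) == i]

# Toy descriptors: indices 0 and 1 correspond across the two images,
# index 2 in desc1 is an ambiguous outlier that the ratio test rejects.
d1 = [(0.0, 0.0), (10.0, 10.0), (5.0, 5.0)]
d2 = [(0.1, 0.0), (10.0, 10.1)]
print(mutual_matches(d1, d2))  # -> [(0, 0), (1, 1)]
```

In practice you would run the same logic on real 128-D or 256-D descriptors with a vectorized distance computation (e.g. a brute-force matcher with cross-check and knn ratio filtering); the point is that these two filters are nearly free compared to a transformer-based matcher.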
Thanks for your reply!
Sorry to bother you. This is a great project! I have a question about the HPatches-v results: do you have any plans or advice for improving SuperPoint's results on HPatches-v?