Closed by andrefaraujo 3 years ago
Hi Andre!
Thanks for your message! We used "delg_gld_20200520", which reported RO(m) 69.7 according to the snapshot of the repo and the arXiv paper at the time of our experiments. This is RN50, right? I just realized that the repo has been updated since then. Indeed, there is a typo in footnote 10; thanks for catching it! I will correct that.
No, unfortunately, we have not tried any other model yet, but we plan to do so soon, since DELG performs really well on visual localization.
Best, Martin
Thanks for the quick reply, Martin!
Indeed, the model you used would be the RN50-based one -- we had only released the RN50 initially.
The new models, with RN101 and also trained on GLDv2, can be found here, and their performances are listed in the DELG paper (GLDv2-trained results are in the appendix). Note that we did some small reformatting of the codebase since then, so you may want to follow the updated instructions to make sure it works.
Good to hear that you are planning to run more experiments, looking forward to the results :)
Hello,
I just saw your paper "Benchmarking Image Retrieval for Visual Localization". Thanks for the nice study; the results are very interesting. I have a couple of questions regarding the DELG models you used.
1) In the paper, you mention using the RN101 DELG model trained on GLDv1 (as per footnote 10). However, Tab. 1 lists the retrieval mAP numbers for DELG's RN50 backbone; e.g., RO(m) for the RN101 version is 73.2 instead of 69.7. I am wondering if this was a typo, or if you were accidentally using the RN50 version.
2) Have you tried the DELG model variants trained on GLDv2? They are available in our repository. I am just curious whether training on this dataset could improve performance for your application.
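For readers less familiar with the RO(m) numbers being compared here: they are mean average precision (mAP) scores on the ROxford "medium" protocol. Below is a minimal, simplified sketch of how mAP is computed from ranked retrieval results. Note this is an illustration only; the actual ROxford evaluation additionally handles "junk" images and difficulty splits, which this sketch omits.

```python
def average_precision(ranked_ids, relevant_ids):
    """AP for one query.

    ranked_ids: database image ids in retrieved order.
    relevant_ids: ground-truth matching ids for this query.
    (Simplified: no junk-image handling as in the real ROxford protocol.)
    """
    relevant = set(relevant_ids)
    if not relevant:
        return 0.0
    hits, precision_sum = 0, 0.0
    for rank, image_id in enumerate(ranked_ids, start=1):
        if image_id in relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this recall point
    return precision_sum / len(relevant)


def mean_average_precision(results):
    """results: list of (ranked_ids, relevant_ids) pairs, one per query."""
    return sum(average_precision(r, g) for r, g in results) / len(results)
```

For example, a ranking [1, 2, 3] with relevant set {1, 3} gives AP = (1/1 + 2/3) / 2 ≈ 0.833; averaging such per-query APs yields the mAP figures (e.g., 69.7 vs. 73.2) quoted above, expressed as percentages.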
Thanks, and again very nice work!
Andre