AubreyCH opened this issue 5 months ago
Hello @AubreyCH, I am currently trying to understand the process of training LightGlue, but I ran into a problem at the fine-tuning stage: I don't have enough storage space for the full downloaded MegaDepth dataset. Could you briefly describe the structure and content of the MegaDepth dataset you used? I would greatly appreciate your feedback.
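Since the concern above is storage space, one practical step (a sketch, not part of the official tooling) is to measure which top-level subdirectories of a partially downloaded dataset dominate the footprint, so you can decide what can be pruned. The `root` path is whatever directory you extracted the dataset into:

```python
import os

def dir_sizes(root):
    """Return {subdir_name: total_bytes} for each top-level subdirectory of root."""
    sizes = {}
    for entry in os.scandir(root):
        if entry.is_dir():
            total = 0
            # Walk the subtree and sum the size of every regular file.
            for dirpath, _, filenames in os.walk(entry.path):
                for name in filenames:
                    total += os.path.getsize(os.path.join(dirpath, name))
            sizes[entry.name] = total
    return sizes

if __name__ == "__main__":
    for name, size in sorted(dir_sizes(".").items(), key=lambda kv: -kv[1]):
        print(f"{name}: {size / 1e9:.2f} GB")
```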
Hi! Thank you for your excellent work! I've been trying to reproduce the results reported in the paper recently. Here's what I got:
Using two RTX 4090s and the official config, these are the results I got from pretraining on the homography dataset: ![09fa20b94f85008974ca40bed1057ac0](https://github.com/cvg/LightGlue/assets/52943195/4ef82765-4882-41ee-a991-e96dc6d04fe7)
**Fine-tuning on MegaDepth.** I then followed the settings described in the paper (lr of 1e-5, decayed by 0.8 after 10 epochs) and got the checkpoint_best at Epoch 48: New best val: loss/total=0.4931303240458171. The test results are as follows: ![4f10be29187de7e8a9432651e6ba6b16](https://github.com/cvg/LightGlue/assets/52943195/fedc5801-9b22-4a93-84d7-fc4cdd625923)
I also tried the settings in the official config (lr of 1e-4, decayed by 0.95 after 30 epochs), and this is what I got:
Epoch 49: New best val: loss/total=0.3537058917681376
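For clarity, the two learning-rate schedules compared above can be sketched as a plain exponential decay that starts after a warm-hold period. This is a minimal illustration of the schedules as described, not the project's actual scheduler code:

```python
def lr_at_epoch(base_lr, epoch, decay_start, gamma):
    """Learning rate at a given epoch: constant until decay_start,
    then multiplied by gamma once per epoch thereafter."""
    if epoch < decay_start:
        return base_lr
    return base_lr * gamma ** (epoch - decay_start)

# Paper setting: lr 1e-5, decay by 0.8 after epoch 10.
paper_lr = lr_at_epoch(1e-5, 48, decay_start=10, gamma=0.8)

# Official-config setting: lr 1e-4, decay by 0.95 after epoch 30.
config_lr = lr_at_epoch(1e-4, 49, decay_start=30, gamma=0.95)
```

Note how differently the two schedules behave late in training: the paper's 0.8 decay shrinks the lr by orders of magnitude by epoch ~50, while the config's 0.95 decay keeps it within the same order of magnitude, which may partly explain the different final losses.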
I've also tried several other settings and still couldn't reach the results reported in the paper. Could you please give more details on how you fine-tuned the model on the MegaDepth dataset, or any other suggestions for improving the performance?