Hi,

Just FYI, the configs are different compared to the paper:

Paper: lr 1e-5, decayed by 0.95 each epoch after 10 epochs.
Training details (quoted from the paper): Weights are initialized from the pre-trained model on homographies. Training starts with a learning rate of 1e-5 and we exponentially decay it by 0.95 in each epoch after 10 epochs, and stop training after 50 epochs (2 days on 2 RTX 3090). The top 2048 keypoints are extracted per image, and we use a batch size of 32. To speed up training, we cache detections and descriptors per image, requiring around 200 GB of disk space.
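In other words, a minimal sketch of the paper's schedule as I read it (not the actual training code):

```python
def paper_lr(epoch: int) -> float:
    # Paper schedule: constant 1e-5 for the first 10 epochs,
    # then multiplied by 0.95 once per epoch.
    return 1e-5 * 0.95 ** max(0, epoch - 10)

print(paper_lr(10))  # 1e-05 (decay has not started yet)
print(paper_lr(50))  # ~1.29e-06 at the end of training
```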
Config: lr 1e-4, decay starts after the 30th epoch:
https://github.com/cvg/glue-factory/blob/main/gluefactory/configs/superpoint%2Blightglue_megadepth.yaml#L43C1-L53C23
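Assuming the config keeps the same 0.95 per-epoch factor (I haven't verified the exact lr_schedule fields in the YAML), the two schedules end up quite far apart by epoch 50:

```python
def paper_lr(epoch: int) -> float:
    return 1e-5 * 0.95 ** max(0, epoch - 10)

def config_lr(epoch: int) -> float:
    # Repo config: lr 1e-4, decay starting after epoch 30.
    # The 0.95 factor here is my assumption.
    return 1e-4 * 0.95 ** max(0, epoch - 30)

for epoch in (0, 10, 30, 50):
    print(f"epoch {epoch:2d}: paper {paper_lr(epoch):.2e}  config {config_lr(epoch):.2e}")
# epoch 50: paper ~1.29e-06 vs config ~3.59e-05, roughly a 28x gap.
```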