cvg / glue-factory

Training library for local feature detection and matching
Apache License 2.0

Discrepancy in training config w.r.t. paper #43

Open ducha-aiki opened 10 months ago

ducha-aiki commented 10 months ago

Hi,

Just FYI, the config in this repo differs from the paper:

Paper: lr 1e-5, decayed by 0.95 each epoch after 10 epochs.

Training details: Weights are initialized from the pre-trained model on homographies. Training starts with a learning rate of 1e-5 and we exponentially decay it by 0.95 in each epoch after 10 epochs, and stop training after 50 epochs (2 days on 2 RTX 3090). The top 2048 keypoints are extracted per image, and we use a batch size of 32. To speed-up training, we cache detections and descriptors per image, requiring around 200 GB of disk space.

Config: lr 1e-4, decay starting after the 30th epoch.

https://github.com/cvg/glue-factory/blob/main/gluefactory/configs/superpoint%2Blightglue_megadepth.yaml#L43C1-L53C23

train:
    seed: 0
    epochs: 50
    log_every_iter: 100
    eval_every_iter: 1000
    lr: 1e-4
    lr_schedule:
        start: 30
        type: exp
        on_epoch: true
        exp_div_10: 10
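
For reference, a small sketch (not the library's code) comparing the two schedules. It assumes `exp_div_10: 10` means the learning rate is divided by 10 over 10 epochs, i.e. a per-epoch factor of `10 ** (-1/10) ≈ 0.794`; please correct me if the trainer interprets it differently.

```python
# Assumed semantics, not gluefactory code:
# - paper:  lr 1e-5, multiplied by 0.95 each epoch after epoch 10
# - config: lr 1e-4, exponential decay starting at epoch 30, where
#   exp_div_10 = 10 is read as "divide the lr by 10 every 10 epochs",
#   i.e. a per-epoch factor of 10 ** (-1 / 10) ≈ 0.794

def paper_lr(epoch, base=1e-5, start=10, gamma=0.95):
    # 0.95 decay per epoch once `start` epochs have passed
    return base * gamma ** max(0, epoch - start)

def config_lr(epoch, base=1e-4, start=30, exp_div_10=10):
    # exponential decay kicking in after `start` epochs
    gamma = 10 ** (-1 / exp_div_10)
    return base * gamma ** max(0, epoch - start)

for epoch in (0, 10, 30, 40, 49):
    print(f"epoch {epoch:2d}: paper {paper_lr(epoch):.2e}  config {config_lr(epoch):.2e}")
```

Under that reading, the config starts 10x higher, keeps the lr constant for 20 more epochs, and then decays faster per epoch than the paper's 0.95 schedule.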