rmaphoh / RETFound_MAE

RETFound - A foundation model for retinal images

Increasing learning rate #8

Closed Hong7Cong closed 8 months ago

Hong7Cong commented 9 months ago

While fine-tuning the pretrained RETFound_cfp on our dataset, I noticed that the learning rate was increasing every batch. Why? Is there a practice behind this? Thanks

[screenshot: logged learning rate increasing every batch]

My bash command was: python -m torch.distributed.launch --nproc_per_node=1 --master_port=48798 main_finetune.py --batch_size 4 --world_size 1 --model vit_large_patch16 --epochs 100 --lr 5e-3 --blr 5e-3 --layer_decay 0.65 --weight_decay 0.05 --drop_path 0.1 --nb_classes 2 --data_path ../../chla_fundus/ --task ./finetune_chla/ --finetune ./RETFound_cfp_weights.pth

rmaphoh commented 9 months ago

Hi Hong, thanks for your interest in RETFound. The learning rate is scheduled and updated at every iteration by the learning-rate scheduler in the training code. This is a common, practical schedule: a warm-up phase (increasing) followed by cosine annealing (decreasing). The curve looks like the one below.

[plot: learning rate schedule, linear warm-up followed by cosine decay]
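For anyone landing here later, below is a minimal sketch of this kind of per-iteration warm-up + cosine schedule. The function and parameter names (e.g. `base_lr`, `min_lr`, `warmup_epochs`) are illustrative and may differ from what `main_finetune.py` actually uses:

```python
import math

def adjust_learning_rate(optimizer, epoch, *, base_lr, min_lr, warmup_epochs, total_epochs):
    """Linear warm-up followed by half-cycle cosine decay.

    `epoch` is passed as a fractional value (epoch + step / steps_per_epoch),
    which is why the learning rate changes every batch rather than every epoch.
    """
    if epoch < warmup_epochs:
        # Warm-up: LR ramps linearly from 0 up to base_lr.
        lr = base_lr * epoch / warmup_epochs
    else:
        # Cosine annealing: LR decays smoothly from base_lr down to min_lr.
        progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
        lr = min_lr + (base_lr - min_lr) * 0.5 * (1.0 + math.cos(math.pi * progress))
    for param_group in optimizer.param_groups:
        # Layer-wise LR decay (--layer_decay) is applied via a per-group scale, if present.
        param_group["lr"] = lr * param_group.get("lr_scale", 1.0)
    return lr
```

So with the fine-tuning command above, the rate you see rising batch by batch is simply the warm-up phase; it peaks once warm-up ends and then decays toward the minimum over the remaining epochs.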
rmaphoh commented 8 months ago

This issue has been closed due to no further response.