zoezhou1999 opened 4 years ago
By the way, could you provide more detailed information about the fine-tuning part? Thanks in advance.
Hi, I used the ResNet-50 weights from this repo: https://github.com/joe-siyuan-qiao/pytorch-classification/tree/e6355f829e85ac05a71b8889f4fff77b9ab95d0b
The finetuning we refer to is just dropping the learning rate and training for more epochs.
Hi, thank you for your reply. Does "dropping the learning rate" mean using a constant LR lower than the final epoch's LR and then training for more epochs, or something else?
And do the ResNet-50 weights mean ImageNet-pretrained weights, or the initialization of the ResNet part in this GitHub repo? Thank you. @MarcoForte
Hi, we use their ResNet-50 weights from pre-training on ImageNet: http://cs.jhu.edu/~syqiao/WeightStandardization/R-50-GN-WS.pth.tar
For dropping the learning rate, here is the relevant text from the paper: "The initial learning rate is set at 1e-5 and then dropped to 1e-6 at 40 epochs and fine-tuned for 5 more epochs."
And here is the PyTorch code to do it: https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.MultiStepLR
torch.optim.lr_scheduler.MultiStepLR(optimizer, [40], gamma=0.1)
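For reference, the piecewise schedule that `MultiStepLR(optimizer, [40], gamma=0.1)` produces can be sketched in plain Python. The milestone (epoch 40) and `gamma` (0.1) come from the paper quote above; the helper name `lr_at_epoch` is just mine for illustration:

```python
def lr_at_epoch(base_lr, milestones, gamma, epoch):
    """Learning rate under a MultiStepLR-style schedule.

    The base rate is multiplied by `gamma` once for every
    milestone the current epoch has reached or passed.
    """
    drops = sum(1 for m in milestones if epoch >= m)
    return base_lr * (gamma ** drops)

# Schedule from the paper: 1e-5 for epochs 0-39, 1e-6 from epoch 40 on.
print(lr_at_epoch(1e-5, [40], 0.1, 0))   # 1e-05
print(lr_at_epoch(1e-5, [40], 0.1, 39))  # 1e-05
```

In the actual training loop you would call `scheduler.step()` once per epoch and PyTorch applies the same multiplication internally.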
Thank you so much! : )
Hi, I am reproducing your project. I found that every time I train, the starting point of convergence is different. Did you use a specific initializer or fixed random seed? Thank you so much~
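Run-to-run variation at the start of training usually comes from unseeded random number generators (weight initialization, data shuffling) rather than from the initializer itself. A minimal sketch of seeding for repeatability, using only the standard library; the helper name `seed_everything` is my own, and whether this project seeds its runs is not stated in this thread:

```python
import random

def seed_everything(seed):
    """Seed the RNG so initialization and shuffling are repeatable.

    In a PyTorch project you would additionally call
    torch.manual_seed(seed) and numpy.random.seed(seed).
    """
    random.seed(seed)

# Two "runs" with the same seed draw identical initial weights.
seed_everything(123)
run_a = [random.gauss(0.0, 0.02) for _ in range(4)]
seed_everything(123)
run_b = [random.gauss(0.0, 0.02) for _ in range(4)]
print(run_a == run_b)  # True
```

Without a fixed seed, each run starts from different random draws, which would explain the different converging start points you observed.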