MarcoForte / FBA_Matting

Official repository for the paper F, B, Alpha Matting
MIT License

How should the model be initialized before starting training? #23

Open zoezhou1999 opened 4 years ago

zoezhou1999 commented 4 years ago

Hi, I am reproducing your project. I found that every time I train, the starting point of convergence is different. Did you use a specific initializer? Thank you so much~
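(For run-to-run reproducibility, a common approach, independent of this repo, is to fix the random seeds before building the model. A minimal sketch using only Python's `random` module; in a PyTorch project you would additionally call `torch.manual_seed(seed)` and `torch.cuda.manual_seed_all(seed)`, which are not shown here:)

```python
import random

def seed_everything(seed: int) -> None:
    """Fix the Python RNG so weight-init draws repeat across runs.
    In a PyTorch project you would also seed torch here
    (torch.manual_seed / torch.cuda.manual_seed_all) -- omitted
    to keep this sketch dependency-free."""
    random.seed(seed)

# Same seed -> same sequence of random draws.
seed_everything(42)
a = random.random()
seed_everything(42)
b = random.random()
```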

zoezhou1999 commented 4 years ago

By the way, could you provide more detailed information about the fine-tuning part? Thanks in advance.

MarcoForte commented 4 years ago

Hi, I used the ResNet-50 weights from this repo: https://github.com/joe-siyuan-qiao/pytorch-classification/tree/e6355f829e85ac05a71b8889f4fff77b9ab95d0b

The finetuning we refer to is just dropping the learning rate and training for more epochs.

zoezhou1999 commented 4 years ago

Hi, thank you for your reply. Does "dropping the learning rate" mean using a constant LR lower than the LR of the final epoch and then training for more epochs, or something else?

zoezhou1999 commented 4 years ago

And do the ResNet-50 weights mean ResNet-50 pretrained weights, or the initialization of the ResNet part in this GitHub repo? Thank you. @MarcoForte

MarcoForte commented 4 years ago

Hi, we use their ResNet-50 weights from pre-training on ImageNet, http://cs.jhu.edu/~syqiao/WeightStandardization/R-50-GN-WS.pth.tar
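(A side note on loading such a checkpoint: files saved under `torch.nn.DataParallel` often carry a `module.` prefix on every key, and `.pth.tar` checkpoints sometimes wrap the weights in a `state_dict` entry. A minimal sketch of normalizing the keys before `model.load_state_dict`; the filename and key layout below are assumptions, not verified against that file:)

```python
def strip_prefix(state_dict, prefix="module."):
    """Return a copy of `state_dict` with `prefix` removed from any key
    that starts with it (DataParallel-style checkpoints)."""
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Hypothetical usage (names are assumptions, not from this repo):
# ckpt = torch.load("R-50-GN-WS.pth.tar", map_location="cpu")
# weights = strip_prefix(ckpt.get("state_dict", ckpt))
# missing, unexpected = model.load_state_dict(weights, strict=False)
```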

For dropping the learning rate here is the relevant text in the paper, "The initial learning rate is set at 1e-5 and then dropped to 1e-6 at 40 epochs and fine-tuned for 5 more epochs. "

and here is the PyTorch code to do it (https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.MultiStepLR): `torch.optim.lr_scheduler.MultiStepLR(optimizer, [40], gamma=0.1)`
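(The quoted schedule can be checked in plain Python: `MultiStepLR` multiplies the learning rate by `gamma` once per milestone passed, so with a base LR of 1e-5, one milestone at epoch 40, and `gamma=0.1`, epochs 0–39 run at 1e-5 and epochs 40–44 at 1e-6. A sketch of the scheduler's arithmetic only, not the repo's training code:)

```python
def lr_at_epoch(epoch, base_lr=1e-5, milestones=(40,), gamma=0.1):
    """Learning rate at `epoch` under a MultiStepLR-style schedule:
    multiply base_lr by gamma once for each milestone already reached."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# 40 epochs at 1e-5, then 5 fine-tuning epochs at 1e-6 (45 total).
schedule = [lr_at_epoch(e) for e in range(45)]
```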

zoezhou1999 commented 4 years ago

Thank you so much! : )