Britefury / self-ensemble-visual-domain-adapt-photo

Code repository for the VisDA-17 experiments in our paper 'Self-ensembling for Domain Adaptation'
MIT License

something about lr_rate #5

Closed SmartPorridge closed 5 years ago

SmartPorridge commented 5 years ago

Hi, if I use a pretrained model, the learning rate should be 1e-4 * 0.1 = 1e-5, and then the LR stays at 1e-5 until the end of training?

Thank you.

Britefury commented 5 years ago

Following the shell script run_visda17_trainval_resnet152_minaug.sh:

The learning rate is 1e-5. Layers from the pre-trained model are trained using an LR of 1e-5 * 0.1 = 1e-6.
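A minimal sketch of that two-group scheme, in the style of PyTorch per-parameter optimizer options. The names `pretrained_params`, `new_params`, and `PRETRAINED_SCALE` are illustrative, not identifiers from this repository:

```python
# Hypothetical sketch: pretrained backbone layers train at 0.1x the base LR,
# newly added layers at the full base LR. The LR is held constant (no scheduler).

BASE_LR = 1e-5          # LR used for newly added layers
PRETRAINED_SCALE = 0.1  # pretrained layers train 10x slower: 1e-5 * 0.1 = 1e-6

def make_param_groups(pretrained_params, new_params):
    """Build optimizer parameter groups in the shape expected by e.g.
    torch.optim.Adam(groups): each dict carries its own 'lr'."""
    return [
        {"params": pretrained_params, "lr": BASE_LR * PRETRAINED_SCALE},
        {"params": new_params, "lr": BASE_LR},
    ]

# Placeholder parameter names stand in for actual model parameters here.
groups = make_param_groups(["backbone.conv1.weight"], ["classifier.fc.weight"])
print(groups[0]["lr"], groups[1]["lr"])  # 1e-06 1e-05
```

Passing a list of dicts like this to the optimizer gives each group its own learning rate while sharing all other hyperparameters.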

SmartPorridge commented 5 years ago

@Britefury

  1. Yeah, you are right, it's 1e-6, and it stays at 1e-6 until the end?

  2. I modified your code for a video action recognition task, but I got a terrible result. Could you give some advice on which parameters in the shell script I need to change for this task?

Thank you for your great work. Best Regards!

Britefury commented 5 years ago
  1. The learning rate is held constant throughout training, yes.

  2. I don't know how to best modify it for video recognition! It may not be well suited to this task at all.

Best of luck.