@RyanRTJJ you can try this:

```json
"lr_scheduler_type": "StepLR",
"lr_scheduler_freq": 100,
"lr_scheduler": {
    "gamma": 0.9,
    "step_size": 100
},
```

For the initial learning rate I suggest 1e-4, with a decay of 1e-5.
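If it helps, this config presumably maps onto PyTorch's `torch.optim.lr_scheduler.StepLR`. Here is a minimal sketch of the wiring (the model and optimizer are placeholders, and how often the trainer calls `scheduler.step()` for a given `lr_scheduler_freq` is repo-specific, so treat that part as an assumption):

```python
import torch
from torch.optim.lr_scheduler import StepLR

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)

# "lr_scheduler": {"gamma": 0.9, "step_size": 100}
scheduler = StepLR(optimizer, step_size=100, gamma=0.9)

for step in range(1000):
    # ... forward / backward / optimizer.step() ...
    scheduler.step()  # lr is multiplied by 0.9 once every 100 calls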
Okay, thank you. I'll give it a try sometime.
@RyanRTJJ can you share your recent attempts?
Hi @novioleo,
My training seems to be going well. I can see the model performing better and better (look at this, one of the better examples at 100 epochs):
But my learning rate is dropping quite fast. This is my config:
I was expecting the learning rate to be 0.0001 - 1e-05 = 9e-05 after 50 epochs, but by 100 epochs it had gone down to about 2.1e-07. Did I misunderstand something?
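For reference, `StepLR` decays multiplicatively, not by subtraction: after n decay events the learning rate is lr0 * gamma^n, so 0.0001 - 1e-05 is not what it computes. A back-of-the-envelope check against the observed value (the inferred step count below is just arithmetic, not a claim about this trainer's internals):

```python
import math

lr0, gamma = 1e-4, 0.9
observed = 2.1e-7

# Solve lr0 * gamma**n == observed for n:
n = math.log(observed / lr0) / math.log(gamma)
print(round(n, 1))  # ~58.5 -> roughly 59 decay events in 100 epochs,
# which suggests the scheduler is stepping far more often than once per epoch.
```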
Is there a way to resume from a checkpoint.pth.tar but with a different learning rate and decay?
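In plain PyTorch, one common approach is to load the checkpoint and then overwrite the hyperparameters in the optimizer's `param_groups` before resuming. A minimal sketch, assuming the checkpoint stores the weights and optimizer state under `state_dict` and `optimizer` keys (the actual layout of `checkpoint.pth.tar` in this repo is an assumption):

```python
import torch

model = torch.nn.Linear(10, 2)  # placeholder for the real model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

ckpt = torch.load("checkpoint.pth.tar", map_location="cpu")
model.load_state_dict(ckpt["state_dict"])     # key names are assumptions
optimizer.load_state_dict(ckpt["optimizer"])  # about the checkpoint layout

# Override the restored hyperparameters, then resume training:
for group in optimizer.param_groups:
    group["lr"] = 5e-5            # new learning rate (example value)
    group["weight_decay"] = 1e-5  # new decay (example value)
```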
What's your recommended learning rate and decay? My training set is 12,000+ images and my validation set is 2,000+ images.