Closed: shaoyn0817 closed this issue 6 years ago.
Hi, in train.py there is this line:

lr_decay_fn = lambda lr, global_step: tf.train.exponential_decay(lr, global_step, 100, 0.95, staircase=True)

May I ask why the decay step is set to 100? I have seen other code set it to around 10000. How should this parameter be decided?

Hello. Yes, the decay setup is completely wrong. It should be the total number of training steps, which is roughly num_epochs * num_batches_per_epoch; that is why it is around 10000 in most other code. I'll be fixing this later this week, along with other bugs and updates to the code.

Great, thanks~
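For readers landing here later, a minimal sketch (TF 1.x) of the fix described above: derive decay_steps from the dataset size rather than hard-coding 100. The dataset size, batch size, and epoch count below are hypothetical values for illustration and do not come from this repo.

```python
import tensorflow as tf

# Assumed values, not from the repo; substitute your own.
num_examples = 50000
batch_size = 128
num_epochs = 25

num_batches_per_epoch = num_examples // batch_size
# Decay steps tied to the length of training, per the comment above;
# with these numbers it lands on the order of 10000.
decay_steps = num_epochs * num_batches_per_epoch

lr_decay_fn = lambda lr, global_step: tf.train.exponential_decay(
    lr, global_step, decay_steps, 0.95, staircase=True)
```

With staircase=True the learning rate drops by the factor 0.95 in discrete jumps every decay_steps steps, instead of decaying smoothly every step.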