What have you done to try to solve this issue?:
Added a global_step variable, but I am unsure what its impact is.
TensorFlow version?:
1.10.1
Describe the problem
I wasn't entirely sure whether this is already handled in the code, so I wanted to clarify a few things. When restoring a checkpoint and resuming training from a non-zero epoch index, are the optimizer's training parameters, including the decayed learning rate and TensorFlow's global_step, also restored correctly to the point where the checkpoint left off?
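For reference, here is a minimal sketch of the pattern I am asking about (TF 1.x style). The Momentum optimizer, decay hyperparameters, toy loss, and `./ckpts` directory are illustrative placeholders, not my actual setup:

```python
import tensorflow as tf

# Toy stand-ins: `w` and the squared loss are placeholders for a real model.
w = tf.Variable(5.0, name="w")
loss = tf.square(w)

# global_step is an ordinary variable, so it is written to the checkpoint.
global_step = tf.train.get_or_create_global_step()

# A decayed learning rate is computed *from* global_step rather than stored,
# so restoring global_step should restore the schedule to the right point.
learning_rate = tf.train.exponential_decay(
    learning_rate=0.1, global_step=global_step,
    decay_steps=1000, decay_rate=0.96, staircase=True)

optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)
train_op = optimizer.minimize(loss, global_step=global_step)

# The default Saver covers all variables, including global_step and the
# optimizer's slot variables (e.g. the Momentum accumulators).
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    ckpt = tf.train.latest_checkpoint("./ckpts")  # hypothetical directory
    if ckpt:
        saver.restore(sess, ckpt)  # resume: global_step and slots come back
    for _ in range(100):
        sess.run(train_op)
    saver.save(sess, "./ckpts/model", global_step=global_step)
```

My question is whether this is sufficient, i.e. whether restoring global_step via the default Saver is enough for the decayed learning rate and optimizer state to pick up where the checkpoint left off.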