torch / optim

A numeric optimization package for Torch.

What is the correct way to continue training after the xth epoch? #129

Open euwern opened 8 years ago

euwern commented 8 years ago

I am currently using optim.adam to train my network. Let's say I train my network up to the xth epoch and save my model; what settings in the optim function should I save in order to continue the training?

I notice that if I just load my saved model, the computed loss does not follow the trend (it actually goes back to the loss computed in the first epoch). There must be some settings I need to reload in order to get back to a similar loss.

The way I compare the results: I train continuously to epoch x + n and record the loss, having saved my model at epoch x. Then I reload the saved model from epoch x, train for n more epochs, and compare the losses.

Technically speaking they should be similar. I hope someone can shed some light on this issue.

gulvarol commented 7 years ago

I am not sure, but could it be the 'state' variable that you might need to reload when continuing from a saved model?
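A minimal sketch of what that would look like, assuming the usual optim usage where a single `optimState` table is passed to `optim.adam` on every call (the filenames and network here are illustrative, not from the thread). Adam accumulates its moment estimates and step counter inside that table, so it must be checkpointed and reloaded along with the model:

```lua
require 'torch'
require 'nn'
require 'optim'

-- illustrative model; replace with your own network
local model = nn.Linear(10, 1)
local params, gradParams = model:getParameters()

-- adam stores its running state (t, m, v, ...) in this table as it runs
local optimState = {learningRate = 1e-3}

-- ... training loop repeatedly calls optim.adam(feval, params, optimState) ...

-- checkpoint at epoch x: save the model AND the optimizer state
torch.save('model.t7', model)
torch.save('optimState.t7', optimState)

-- to resume later: reload both before continuing training
model = torch.load('model.t7')
params, gradParams = model:getParameters()
optimState = torch.load('optimState.t7')
```

If only the model is reloaded, `optim.adam` starts from a fresh (empty) state table, which resets the bias-corrected moment estimates and could explain the loss jumping back toward its initial value.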