Closed sherjilozair closed 9 years ago
Yep, usually non-stop decay works better. To test the alternative on your dataset, you can add `do_lr_decay = false;` at line 371 in rnnlm.cc:
https://github.com/yandex/faster-rnnlm/blob/master/faster-rnnlm/rnnlm.cc#L371
As a result, the learning rate will decay if and only if the last epoch was bad.
Thanks.
Is it the expected behavior to keep decaying the learning rate every epoch after a single bad epoch? Doesn't that shrink the learning rate exponentially? Wouldn't it be better to decay the learning rate once, and then decay it again only if the validation error still fails to decrease?
In your experience, did you find the status quo better?