Open fangyw opened 8 years ago
Hi, I am really impressed by your readable code. A simple question about the code, which may not be worth mentioning: I think training should stop when the PPL on the development set no longer improves, or when the error rate meets our requirement, rather than basing the decision on the training set. If the current PPL is larger than the previous one, we should reduce the learning rate or make some other decision.
@fangyw Yes, RNNSharp already supports such a strategy, but we don't use it currently. If you want to enable this feature, you can check the return value of "rnn.ValidateNet(ValidationSet, iter)". If it's false, it means we cannot get a better model on the validation corpus, and we can then update the learning rate.
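A minimal sketch of the strategy discussed above: monitor perplexity on the development (validation) set after each epoch, decay the learning rate when it stops improving, and stop early after a few consecutive epochs without improvement. The function name, the decay factor, and the patience threshold are illustrative assumptions, not RNNSharp's actual API.

```python
def schedule(dev_ppls, lr=0.1, decay=0.5, patience=2):
    """Walk through per-epoch dev-set perplexities; return the final
    learning rate and the epoch at which training would stop."""
    best = float("inf")
    bad = 0
    for epoch, ppl in enumerate(dev_ppls):
        if ppl < best:           # dev PPL improved: keep current lr
            best = ppl
            bad = 0
        else:                    # no improvement: decay the lr
            bad += 1
            lr *= decay
            if bad >= patience:  # give up after `patience` bad epochs
                return lr, epoch
    return lr, len(dev_ppls) - 1


# Dev PPL improves, then worsens twice: lr is halved twice and
# training stops at epoch 3.
print(schedule([100.0, 90.0, 95.0, 96.0]))  # → (0.025, 3)
```

In RNNSharp this corresponds to acting on a false return from rnn.ValidateNet(ValidationSet, iter), as described in the reply above.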