Closed: Warvito closed this issue 9 years ago
I think your pull request solved the first issue you were talking about.
As for the second issue, sure, you can define such a variable, but if you do not check often enough you might miss the sweet spot at which you need to stop.
I believe this issue was solved by my latest PR.
Hi!
Why isn't the constant "improvmentTreshold" used in the trainModelPatience function of deepbelief.py? This constant is used in ann.py to set the relative improvement considered significant. Why not use it in the DBN as well?
In addition, the report states that the early stopping has a significant downside: "it requires checking the validation error after each mini batch, not only after each epoch". Why not use a "validation_frequency" constant, similar to the one presented in http://deeplearning.net/tutorial/gettingstarted.html? This technique lets you define when the validation set is evaluated, as in the sketch below.
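For reference, here is a minimal sketch of how a validation_frequency constant could be combined with the patience mechanism, following the structure of the deeplearning.net tutorial linked above. The helper names `train_on_minibatch` and `validation_error` are placeholders, not functions from this repository, and `improvement_threshold` only plays the role that "improvmentTreshold" has in ann.py.

```python
import numpy as np

def train_with_patience(train_on_minibatch, validation_error, n_minibatches,
                        n_epochs=100, patience=5000, patience_increase=2,
                        improvement_threshold=0.995, validation_frequency=None):
    """Train until `patience` (counted in minibatches) runs out.

    The validation error is evaluated only every `validation_frequency`
    minibatches, so the cost of early stopping can be tuned instead of
    paying for a validation pass after every single minibatch.
    """
    if validation_frequency is None:
        # Tutorial-style default: validate at most once per epoch.
        validation_frequency = min(n_minibatches, patience // 2)

    best_validation_error = np.inf
    done_looping = False

    for epoch in range(n_epochs):
        if done_looping:
            break
        for minibatch_index in range(n_minibatches):
            train_on_minibatch(minibatch_index)
            # Total number of minibatches processed so far.
            iteration = epoch * n_minibatches + minibatch_index

            if (iteration + 1) % validation_frequency == 0:
                current_error = validation_error()
                if current_error < best_validation_error:
                    # Only a *significant* relative improvement extends the patience.
                    if current_error < best_validation_error * improvement_threshold:
                        patience = max(patience, iteration * patience_increase)
                    best_validation_error = current_error

            if patience <= iteration:
                done_looping = True
                break

    return best_validation_error
```

Setting validation_frequency to 1 recovers the per-minibatch check described in the report; larger values trade the cost of extra validation passes against the risk of missing the best stopping point.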
Best regards