mihaelacr / pydeeplearn

Deep learning API with emotion recognition application
BSD 3-Clause "New" or "Revised" License

Early stopping in deepbelief.py #11

Closed by Warvito 9 years ago

Warvito commented 9 years ago

Hi!

Why isn't the constant "improvmentTreshold" used in the trainModelPatience function of deepbelief.py? This constant is used in ann.py to set the relative improvement that is considered significant. Why not use it in the DBN as well?
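For illustration, here is a rough sketch of the kind of patience-based early stopping I mean, where a new validation result only counts as progress if it beats the best result by the relative threshold. The names (train_step, validate, improvement_threshold, etc.) are placeholders, not pydeeplearn's actual API:

```python
import numpy as np

def train_with_patience(train_step, validate, max_epochs,
                        patience=10, patience_increase=2,
                        improvement_threshold=0.995):
    """Illustrative patience loop; train_step and validate are user callbacks."""
    best_validation_error = np.inf
    epoch = 0
    while epoch < max_epochs and epoch < patience:
        train_step(epoch)
        validation_error = validate()
        # Count the result as an improvement only if it is "significantly"
        # better, i.e. below improvement_threshold * best error so far.
        if validation_error < best_validation_error * improvement_threshold:
            patience = max(patience, epoch * patience_increase)
            best_validation_error = validation_error
        epoch += 1
    return best_validation_error
```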

In addition, the report states that early stopping has a significant downside: "it requires checking the validation error after each mini batch, not only after each epoch". Why not use a "validation_frequency" constant in a way similar to the one presented in http://deeplearning.net/tutorial/gettingstarted.html? That technique lets you control how often the validation set is evaluated.
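Something along these lines (again just a sketch following the tutorial's pattern; the callbacks and parameter names are placeholders, not pydeeplearn's code):

```python
import numpy as np

def train_with_validation_frequency(train_minibatch, validate,
                                    n_epochs, n_train_batches,
                                    patience=5000, patience_increase=2,
                                    improvement_threshold=0.995):
    """Validate only every validation_frequency minibatches, not after each one."""
    validation_frequency = min(n_train_batches, patience // 2)
    best_validation_error = np.inf
    done = False
    for epoch in range(n_epochs):
        if done:
            break
        for minibatch_index in range(n_train_batches):
            train_minibatch(minibatch_index)
            iteration = epoch * n_train_batches + minibatch_index
            # Check the validation error only every validation_frequency steps.
            if (iteration + 1) % validation_frequency == 0:
                validation_error = validate()
                if validation_error < best_validation_error * improvement_threshold:
                    patience = max(patience, iteration * patience_increase)
                    best_validation_error = validation_error
            if patience <= iteration:
                done = True
                break
    return best_validation_error
```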

Best regards

mihaelacr commented 9 years ago

I think your pull request solved the first issue you were talking about.

As for the second issue: sure, you can define such a variable, but if you do not check often enough you might miss the sweet spot at which you need to stop.

mihaelacr commented 9 years ago

I believe this issue was solved by my latest PR.