Closed tupini07 closed 5 years ago
No special reason, really. The value for `patience` was just determined experimentally: the model is slow to improve `val_loss` after the first few epochs. For example, see this graph of how `val_loss` varies when training the `mlp` on `imdb/writer`. The validation loss decreases quickly until epoch 50, then decreases very slowly, with the optimal loss found at epoch 161. Without `patience=100` training would have stopped much earlier.
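The patience mechanism described above can be sketched in plain Python (a minimal illustration of the idea, not this project's actual early-stopping callback; the function name is hypothetical):

```python
def stopped_epoch(val_losses, patience):
    """Return the epoch at which early stopping with the given
    patience would halt training, or None if it never triggers.

    Training stops once `patience` consecutive epochs pass without
    the validation loss improving on its best value so far.
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            wait = 0  # improvement: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None
```

With a loss curve that plateaus early, a small patience stops training well before the run ends, which is why a large value like `patience=100` was needed to reach the optimum at epoch 161.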
Implemented a multilayer perceptron classifier. A base class was created for our custom NN implementations; it already defines a `fit` method whose functionality is common to both the `slp` and the `mlp`. This was implemented as part of #286
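The shared-base-class design could look roughly like this (a hypothetical sketch with stand-in names and a dummy training step, not the code from #286):

```python
class BaseNNClassifier:
    """Sketch of a base class for custom NN classifiers: the fit()
    loop is shared, subclasses only supply build_model()."""

    def __init__(self):
        self.model = None
        self.history = []  # per-epoch losses recorded by fit()

    def build_model(self):
        raise NotImplementedError("subclasses define the architecture")

    def fit(self, X, y, epochs=3):
        # Shared training logic: build the model once, then run the
        # epoch loop and record the loss after each epoch.
        if self.model is None:
            self.model = self.build_model()
        for _ in range(epochs):
            loss = self.model(X, y)  # stand-in for one training step
            self.history.append(loss)
        return self.history


class MLP(BaseNNClassifier):
    def build_model(self):
        # Stand-in "model": returns a decreasing pseudo-loss so the
        # shared loop's behaviour is visible without a real network.
        return lambda X, y: 1.0 / (len(self.history) + 1)
```

An `SLP` subclass would override `build_model` the same way, which is what lets `fit` live in the base class.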