Closed: gngdb closed this issue 9 years ago
The NLL of a neural network is the same as the log loss; that's definitely right. However, the score reported on the validation sets above differs from the one calculated by our scripts, whether they're run on the validation or the holdout set. The scripts' scores match the leaderboard, so I don't know what's going wrong. Maybe the two are using logarithms with different bases?
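A quick sanity check on the base-of-logarithm hypothesis (a minimal sketch with made-up toy numbers, not from our scripts): log loss computed with log base b is just the natural-log version divided by ln(b), so a base mismatch would show up as a constant multiplicative factor between the two scores.

```python
import numpy as np

# Toy labels and predicted probabilities, purely illustrative.
y_true = np.array([1, 0, 1, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.1])

# Probability assigned to the true class for each example.
p = np.where(y_true == 1, y_prob, 1 - y_prob)

nll_natural = -np.mean(np.log(p))   # NLL / log loss with the natural log
nll_base2 = -np.mean(np.log2(p))    # same quantity measured in bits

print(nll_natural)
print(nll_base2)
# The two differ by exactly a factor of ln(2), since log2(x) = ln(x) / ln(2).
print(nll_base2 * np.log(2))        # matches nll_natural
```

If the leaderboard and our scripts disagreed by a constant ratio like ln(2) or ln(10), that would point at a base mismatch; if the discrepancy isn't a constant factor, something else is going on.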
A bit redundant to #43
We weren't sure the other day when looking at the monitoring logs from training a convnet, but y_objective and y_nll looked pretty similar. NLL should be equal to the log loss, but then what is the objective? Trained the `alexnet_based.yaml` network with simple 48x48 resizing for preprocessing and got the following final monitoring logs:

In this one, objective and nll diverged. That could be due to it using a different cost function from the other network.
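One possible explanation, sketched below with made-up numbers (the channel names and values are assumptions, not taken from the monitoring logs): if the cost being optimised is the NLL plus a regularisation penalty such as weight decay, then the objective and nll channels track each other closely when the penalty is small and diverge when it isn't, or when the cost is a different function altogether.

```python
import numpy as np

rng = np.random.RandomState(0)
weights = rng.randn(1000) * 0.1   # stand-in for the network's weights
nll = 0.5                         # stand-in for the data term (y_nll)

# Hypothetical objective = NLL + weight-decay penalty on the weights.
for weight_decay in (0.0, 1e-5, 1e-2):
    objective = nll + weight_decay * np.sum(weights ** 2)
    print(weight_decay, nll, objective)

# With no (or a tiny) weight-decay coefficient the two channels look
# "pretty similar"; with a larger coefficient, or a different cost
# function entirely, objective and nll diverge.
```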
They say you should not be afraid of asking stupid questions. I hope they're right, because this is going to look pretty stupid in retrospect.