lmjohns3 / theanets

Neural network toolkit for Python
http://theanets.rtfd.org
MIT License

Feature request: ongoing training error #26

Closed AminSuzani closed 10 years ago

AminSuzani commented 10 years ago

Hi,

Thanks for this fantastic library. I would like to request access to the ongoing training error, so that I can write loops that try different parameters and pick the ones that yield better convergence.

Cheers, Amin

lmjohns3 commented 10 years ago

I've just added a change to start addressing this issue; after the change, the Experiment#train method will yield a dictionary after each training iteration, e.g.:

for costs in e.train():
    print(costs)  # do something with the costs dictionary

This dictionary contains the mean value of each cost defined by the network model, averaged over the data in a minibatch. The quantity being minimized by the training process is available under the "J" key ("J" is a common name for a generic loss function).
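As a self-contained sketch of consuming these per-iteration dictionaries, here is a stand-in generator playing the role of Experiment#train (the real e.train() depends on your model and data, so the decreasing loss below is simulated):

```python
def fake_train(iterations=5):
    """Stand-in for Experiment#train: yields a cost dict per iteration.

    The real theanets generator would compute 'J' from the model; here we
    just halve a simulated loss each step so the example runs on its own.
    """
    cost = 1.0
    for _ in range(iterations):
        cost *= 0.5
        yield {'J': cost}

# Track the minimized cost over iterations, as you would with e.train().
history = []
for costs in fake_train():
    history.append(costs['J'])

print('final J:', history[-1])
print('monotonically decreasing:',
      all(a > b for a, b in zip(history, history[1:])))
```

The same loop works unchanged against the real trainer; only the generator feeding it differs.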

Depending on the training method and the network model being trained, other costs will also be available in this dictionary: for example, if you use SGD training on a Classifier model, there will also be an "acc" key in the cost dictionary giving the (percentage) classification accuracy on the training data for that minibatch.

Some trainers don't provide a cost metric that makes sense (like the Sample trainer); these always yield a cost dictionary equal to {'J': -1}.
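Since the original request was about comparing parameter settings, here is a hypothetical sweep built on the same idea: run training under several settings and keep the one whose final "J" is lowest. The train_with() generator below is a stand-in for Experiment#train configured with different parameters (the rate argument and its effect on convergence are invented for the example):

```python
def train_with(rate, iterations=10):
    """Simulated training run: higher rate converges faster here."""
    cost = 1.0
    for _ in range(iterations):
        cost *= (1 - rate)  # simulated per-iteration loss reduction
        yield {'J': cost}

# Sweep candidate settings and remember the one with the lowest final cost.
best_rate, best_j = None, float('inf')
for rate in (0.1, 0.3, 0.5):
    final = None
    for costs in train_with(rate):
        final = costs['J']  # last yielded value is the final training cost
    if final < best_j:
        best_rate, best_j = rate, final

print('best rate:', best_rate)
```

With a real Experiment you would rebuild the network for each setting and iterate over e.train() the same way, comparing the final (or best) "J" values.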

If you have any questions or suggestions, or would like to submit a pull request to improve this process further, please go for it!

lmjohns3 commented 10 years ago

I'm going to go ahead and close this; please re-open if you think the feature needs a different solution.