lmjohns3 / theanets

Neural network toolkit for Python
http://theanets.rtfd.org
MIT License

How to change the logging setting for SGD and layerwise trainer? #27

Closed AminSuzani closed 10 years ago

AminSuzani commented 10 years ago

Hi,

I was wondering if there is a way to control the logging output for the layerwise and SGD optimizers. For example, I'd like to see the training error only every 50 updates, not on every update. My training takes a couple of days, and whenever I get back to my computer I can only see the logging from the last hour or so, so I don't get a feel for what's going on.

Thanks for your great package, Amin

kastnerkyle commented 10 years ago

If you are using Linux (maybe Mac too?), try

python -u myfile.py 2>&1 | tee log.log

This logs everything to a file called log.log while still printing to the terminal, so you can see the last hour or so live and keep the full history on disk. The -u flag runs Python in unbuffered mode, which is what lets tee work properly. While logging thresholds may be nice, being able to see the full run is still important!
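If you do want to thin out the terminal output itself rather than capture everything, here is a minimal sketch using only the standard library. It assumes theanets emits its progress lines through Python's built-in `logging` module (which logger name carries them is not confirmed here, so the example attaches the filter to a generic logger):

```python
import logging

class EveryN(logging.Filter):
    """Logging filter that passes only every Nth record it sees."""

    def __init__(self, n):
        logging.Filter.__init__(self)
        self.n = n
        self.count = 0

    def filter(self, record):
        # Increment on every record; let through only multiples of n.
        self.count += 1
        return self.count % self.n == 0

# Attach to whichever logger carries the training messages
# (the logger name here is a placeholder, not the actual theanets one).
log = logging.getLogger("training")
log.addFilter(EveryN(50))
```

With this in place, only every 50th training message reaches the handlers, while a tee'd file handler without the filter could still record everything.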


lmjohns3 commented 10 years ago

I agree with @kastnerkyle that it's useful to have a log with lots of information in it, and the tee strategy is one that I use personally (then later you can grep through the on-disk log file and make quick learning curve graphs, etc.).
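Grepping the on-disk log for learning curves can be sketched like this. The `err=` pattern below is an assumption, not the actual theanets log format; adjust the regular expression to match whatever the lines in your log.log actually look like:

```python
import re

def extract_errors(lines, pattern=r"err[=\s]+([0-9.eE+-]+)"):
    """Pull floating-point error values out of matching log lines.

    The 'err=' pattern is a guess -- replace it with the field name
    that actually appears in your training log.
    """
    errs = []
    for line in lines:
        m = re.search(pattern, line)
        if m:
            errs.append(float(m.group(1)))
    return errs

# Typical use: errors = extract_errors(open("log.log"))
# then feed `errors` to matplotlib or similar for a quick curve.
```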

However, you can also configure how many training batches are processed per log line, using the --train-batches (command-line) or train_batches (programmatic) argument when running your model. Set it to a large number to see less frequent updates.
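Combining this with Kyle's suggestion might look like the following (the script name train.py is a placeholder for your own training script; the value 500 is just an illustration):

```shell
# Log one line per 500 training batches, unbuffered, teeing to disk.
python -u train.py --train-batches 500 2>&1 | tee log.log
```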

AminSuzani commented 10 years ago

Thanks Kyle and Leif for your answers. Using tee was a really good idea. Increasing train_batches made the training faster, but led to lower accuracy.