mme / vergeml

Machine Learning Environment - alpha version
MIT License

Incremental training #10

Open ditomax opened 5 years ago

ditomax commented 5 years ago

Hi,

first of all, thanks for this great project. I plan to use it in education to show how simple ML can be.

For experimenting with training parameters, it would be very cool to have an option to incrementally train a model, e.g. train some epochs, check and debug, and then continue training for more epochs. Maybe this already works and I just did not find the option (I looked in ml help train). Are there parameters to control incremental training?
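To make it concrete, here is roughly the workflow I have in mind, sketched in plain Keras (just an illustration of the idea, not VergeML's API):

import numpy as np
from tensorflow import keras

# dummy data, only for illustration
x = np.random.rand(100, 8)
y = np.random.randint(0, 2, (100,))

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(x, y, epochs=5)                    # train a first batch of epochs
model.save("checkpoint.h5")                  # pause here to inspect and debug

model = keras.models.load_model("checkpoint.h5")
model.fit(x, y, epochs=10, initial_epoch=5)  # continue for 5 more epochs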

Br, Dietmar

mme commented 5 years ago

You are welcome!

This feature did exist in a previous version, but I was not sure whether it was needed, so it got removed...

One thing you can do to achieve something similar is to use early stopping - take a look at the --early-stopping-delta and --early-stopping-patience flags. For example, you could train like this:

ml train --early-stopping-delta=0.05 --early-stopping-patience=10 epochs=100

This translates to "train for 100 epochs, but if accuracy does not improve by at least 0.05 over 10 epochs, stop early and save the resulting model".
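If it helps to see what that means in code, it is conceptually the same as the EarlyStopping callback in plain Keras - a sketch for illustration, not necessarily how VergeML implements it internally:

import numpy as np
from tensorflow import keras

# dummy data, only for illustration
x = np.random.rand(200, 8)
y = np.random.randint(0, 2, (200,))

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

early_stop = keras.callbacks.EarlyStopping(
    monitor="accuracy",         # metric to watch
    min_delta=0.05,             # corresponds to --early-stopping-delta
    patience=10,                # corresponds to --early-stopping-patience
    restore_best_weights=True,  # keep the best weights seen so far
)

model.fit(x, y, epochs=100, callbacks=[early_stop])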

There are a lot of things in VergeML that I have not documented yet, such as preprocessing and the different datasets.

If you need further help setting up the material for your students, feel free to get in touch.