Neuroglycerin / neukrill-net-work

NDSB competition repository for scripting, note-taking and writing submissions.
MIT License

Refine the learning rate schedule #46

Open alinaselega opened 9 years ago

alinaselega commented 9 years ago

Make the learning rate decay more slowly, as the learning rate might currently be getting too small.
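(For context: the models in this repo are pylearn2 YAML configs, where the schedule is set by training extensions. A minimal sketch of the knobs under discussion, assuming the stock LinearDecayOverEpoch and MomentumAdjustor extensions; the actual extension list and values in alexnet_based.yaml may differ:)

```yaml
extensions: [
    # The LR decays linearly from its initial value down to
    # initial * decay_factor, hitting that floor at epoch `saturate`.
    !obj:pylearn2.training_algorithms.sgd.LinearDecayOverEpoch {
        start: 1,
        saturate: 25,       # illustrative; a later comment quadruples this to 100
        decay_factor: 0.01,
    },
    # Momentum ramps up to final_momentum on the same kind of schedule.
    !obj:pylearn2.training_algorithms.learning_rule.MomentumAdjustor {
        start: 1,
        saturate: 25,
        final_momentum: 0.95,
    },
]
```

Raising `saturate` stretches the decay over more epochs, and raising `decay_factor` lifts the floor the LR decays to; the comments below experiment with the former.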

alinaselega commented 9 years ago

Testing a model based on alexnet_based.yaml with an increased epoch at which the learning rate (and momentum) saturate, and with smaller scaling factors for the learning rate. Also monitoring valid_y_nll instead of valid_objective.
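(The "scaling factors" suggest a monitor-driven adjustment on top of, or instead of, the linear decay. A sketch of what this change could look like, assuming the factors map to the shrink_amt/grow_amt parameters of pylearn2's MonitorBasedLRAdjuster; the channel name follows the comment, the numeric values are placeholders:)

```yaml
!obj:pylearn2.training_algorithms.sgd.MonitorBasedLRAdjuster {
    channel_name: 'valid_y_nll', # drive the adjustment off the NLL channel
    shrink_amt: 0.85,            # "smaller scaling factors" (placeholder values)
    grow_amt: 1.05,
    min_lr: 1.0e-5,              # floor that stops the LR getting too small
}
```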

alinaselega commented 9 years ago

valid_y_nll looked quite low after only a few epochs, but the model scored slightly worse on the holdout set than the original alexnet_based model. After letting it run for longer, the score actually worsened.

alinaselega commented 9 years ago

Now trying two models, both with the saturation epoch set to 100 (4 times larger than the original) and the original scaling factors for the learning rate (0.9 and 1.1). One model still monitors valid_y_nll and the second monitors valid_objective again. Results pending.
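(If "monitors" refers to the channel used for model selection, the two runs would differ by a single line each, e.g. with pylearn2's MonitorBasedSaveBest extension; the save paths are hypothetical:)

```yaml
# Run 1: keep the model that is best on the NLL channel.
!obj:pylearn2.train_extensions.best_params.MonitorBasedSaveBest {
    channel_name: 'valid_y_nll',
    save_path: 'run1_best.pkl',
},
# Run 2: keep the model that is best on the overall objective again.
!obj:pylearn2.train_extensions.best_params.MonitorBasedSaveBest {
    channel_name: 'valid_objective',
    save_path: 'run2_best.pkl',
},
```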

alinaselega commented 9 years ago

The model monitoring valid_objective did a little better. Currently running a variation of the current best model (the alexnet-based model with an extra convolutional layer and 8-factor augmentation) with the epoch of learning rate and momentum saturation set to 100.

alinaselega commented 9 years ago

The model didn't do better than the current best after 100 epochs.