There might be a bug in tuning the stochastic optimizers: the selected configuration does not correspond to the minimum training loss.
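For reference, a minimal sketch of the tuning step the report concerns, assuming a simple grid-search tuner. `config_t`, `tune`, and `evaluate` are hypothetical names, not nano's actual API; the invariant is that the returned configuration is exactly the one that achieved the minimum training loss:

```cpp
// Hypothetical tuning loop illustrating the reported bug class: the
// selected configuration must stay in sync with the best training loss.
#include <cassert>
#include <cstddef>
#include <limits>
#include <vector>

struct config_t
{
    double learning_rate;   // hyper-parameter to tune
    double decay;           // hyper-parameter to tune
};

// evaluate_t: runs the stochastic optimizer with the given configuration
// and returns the resulting training loss (assumed provided elsewhere).
template <typename evaluate_t>
config_t tune(const std::vector<config_t>& candidates, const evaluate_t& evaluate)
{
    assert(!candidates.empty());

    std::size_t best = 0;
    double best_loss = std::numeric_limits<double>::max();

    for (std::size_t i = 0; i < candidates.size(); ++i)
    {
        const double loss = evaluate(candidates[i]);
        if (loss < best_loss)   // update the index together with the loss:
        {                       // a mismatch here yields the reported symptom
            best_loss = loss;
            best = i;
        }
    }
    return candidates[best];
}
```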
Fixed the bug related to tuning. Tuning now uses many more training samples, which makes the hyper-parameter selection much more robust.
Fixed all stochastic optimizers except AG-based ones.
Fixed AG-based optimizers as well.
Their hyper-parameter tuning still needs improvement: there is a large variance in the resulting train/test error.
Possible solutions:
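The thread ends before the solutions are listed. Purely as a generic illustration (an assumption, not what was implemented here), one common way to reduce run-to-run variance is to average the tuning objective over several random seeds before selecting a configuration:

```cpp
// Illustrative sketch only: score each candidate configuration by its
// mean training loss across several seeds, instead of a single run.
#include <vector>

// evaluate_t: runs one optimization with the given seed and returns the
// training loss (assumed provided elsewhere; hypothetical signature).
template <typename evaluate_t>
double mean_loss(const evaluate_t& evaluate, const std::vector<unsigned>& seeds)
{
    double sum = 0.0;
    for (const auto seed : seeds)
    {
        sum += evaluate(seed);  // one optimization run per seed
    }
    return sum / static_cast<double>(seeds.size());
}
```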