NLeSC / mcfly

A deep learning tool for time series classification and regression
Apache License 2.0

investigate hyperparameter optimization suggestions by Jurriaan and Berend and Carlos #34

Closed vincentvanhees closed 8 years ago

dafnevk commented 8 years ago

I also found this blogpost very useful: http://blog.turi.com/how-to-evaluate-machine-learning-models-part-4-hyperparameter-tuning

dafnevk commented 8 years ago

We could also look into Optunity; it supports, for example, TPE and other optimizers.

dafnevk commented 8 years ago

Ah sorry, but I see that Optunity uses hyperopt under the hood for TPE, so we might run into the same problems as with hyperas (#35).

vincentvanhees commented 8 years ago

Optunity allows for the CMA-ES optimizer. According to 'Algorithms for Hyper-Parameter Optimization' by James Bergstra, "CMA-ES is a state-of-the-art gradient-free evolutionary algorithm for optimization on continuous domains, which has been shown to outperform the Gaussian search EDA. Notice that such a gradient-free approach allows non-differentiable kernels for the GP regression."

I struggle to digest this. Does this mean that it can handle non-real numbers as hyperparameters, like we want, or is a non-differentiable kernel something different?
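For what it's worth: my reading is that the "non-differentiable kernel" remark is about the covariance kernel of the GP surrogate, which is a different thing, and that CMA-ES itself searches a continuous domain. Categorical or discrete hyperparameters would then need to be encoded into real numbers. A minimal sketch of such an encoding (the hyperparameter names are made up for illustration, not mcfly's actual ones):

```python
# Hypothetical mixed search space -- illustrative names only.
FILTER_CHOICES = [16, 32, 64, 128]          # discrete choices
ACTIVATIONS = ["relu", "tanh", "sigmoid"]   # categorical

def decode(x):
    """Map a point from a continuous domain (what CMA-ES actually
    searches, with each coordinate in [0, 1)) to a mixed
    categorical/continuous configuration."""
    # Index into the categorical lists by scaling and truncating.
    filters = FILTER_CHOICES[int(x[0] * len(FILTER_CHOICES)) % len(FILTER_CHOICES)]
    activation = ACTIVATIONS[int(x[1] * len(ACTIVATIONS)) % len(ACTIVATIONS)]
    # A genuinely continuous parameter: log10(learning rate) in [-5, -1].
    learning_rate = 10 ** (-5 + 4 * x[2])
    return {"filters": filters, "activation": activation,
            "learning_rate": learning_rate}

print(decode([0.6, 0.1, 0.5]))
# -> filters=64, activation='relu', learning_rate=1e-3
```

The optimizer only ever sees the continuous vector `x`; the rounding happens inside the objective. The known downside is that the objective becomes flat between category boundaries, which is exactly why tree-based methods are said to handle categoricals more naturally.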

vincentvanhees commented 8 years ago

Rescale is a commercial service for training deep networks in the cloud, supporting Keras, Torch, and others. Part of the service is Keras hyperparameter optimization: https://blog.rescale.com/deep-neural-network-hyper-parameter-optimization/ It may be good to know that such services exist.

dafnevk commented 8 years ago

In that blogpost they use SMAC, which trains random forests on the results and, according to Alice Zheng's blog, handles categorical variables better. SMAC is available in Python via the pysmac package.
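The underlying idea is sequential model-based optimization: fit a cheap surrogate on the results so far, then spend the next expensive evaluation on the candidate the surrogate likes best. A toy self-contained sketch of that loop (SMAC itself uses random forests and an expected-improvement criterion; this uses a k-nearest-neighbour average purely for illustration):

```python
import random

def smbo_sketch(objective, sample_fn, n_init=5, n_iter=15,
                pool_size=50, k=3, seed=0):
    """Toy SMBO loop: surrogate = mean score of the k nearest
    already-evaluated configs (stand-in for SMAC's random forest)."""
    rng = random.Random(seed)
    history = []  # (config, score) pairs
    for _ in range(n_init):  # warm up with random evaluations
        x = sample_fn(rng)
        history.append((x, objective(x)))
    for _ in range(n_iter):
        def predict(x):
            nearest = sorted(history, key=lambda h: abs(h[0] - x))[:k]
            return sum(s for _, s in nearest) / len(nearest)
        # Propose a pool of random candidates, evaluate only the one
        # the surrogate predicts to score best.
        pool = [sample_fn(rng) for _ in range(pool_size)]
        x = max(pool, key=predict)
        history.append((x, objective(x)))
    return max(history, key=lambda h: h[1])

# Toy 1-D continuous problem; a real objective would train and
# validate a model, which is the expensive part being economized.
best_x, best_score = smbo_sketch(lambda x: -(x - 0.3) ** 2,
                                 lambda rng: rng.uniform(0, 1))
print(best_x, best_score)
```

The sketch is 1-D and continuous for brevity; the selling point of SMAC is that its random-forest surrogate takes mixed categorical/continuous configurations directly, with no distance metric needed.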

dafnevk commented 8 years ago

Another interesting blogpost: http://www.argmin.net/2016/06/20/hypertuning/ (see also the comments below it). The conclusion is that Bayesian methods such as TPE and SMAC find an optimum only somewhat faster than random search (the speedup is no more than 2x), and random search is easily parallelizable.
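To make the parallelizability point concrete: every draw in random search is independent, so all candidates can be generated up front and evaluated concurrently, with no sequential model update in between. A minimal sketch (the objective is a stand-in for a real train-and-validate run, and the hyperparameter names are made up):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def objective(params):
    # Stand-in for one full training run (the expensive part);
    # this toy version just scores closeness to a made-up optimum.
    lr, num_layers = params
    return -(lr - 0.01) ** 2 - (num_layers - 3) ** 2

rng = random.Random(0)
# All candidates drawn independently up front -- nothing sequential,
# which is why random search parallelizes so easily compared to
# Bayesian methods, where each proposal depends on earlier results.
candidates = [(rng.uniform(1e-4, 1e-1), rng.randint(1, 6))
              for _ in range(32)]

with ThreadPoolExecutor(max_workers=8) as pool:
    scores = list(pool.map(objective, candidates))

best_score, best_params = max(zip(scores, candidates))
print(best_params, best_score)
```

For real training runs one would use separate processes or machines rather than threads, but the structure is the same: a single `map` over independent trials.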

It seems that TPE and SMAC are the only algorithms really suitable for the type of problem we have, with mixed categorical, discrete and continuous hyperparameters. This paper compares the methods; SMAC seems to be better than TPE in a majority of the medium/high-dimensional cases.
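For concreteness, a sketch of what such a mixed space looks like and how one would sample it uniformly (hypothetical hyperparameter names, not mcfly's actual ones; a log-uniform draw for the learning rate, which is the usual convention):

```python
import math
import random

# Illustrative mixed search space: categorical, discrete and
# continuous hyperparameters side by side -- the kind of space
# TPE and SMAC are designed to handle natively.
SPACE = {
    "model_type": ["cnn", "deepconvlstm"],   # categorical
    "num_layers": (1, 6),                    # discrete (integer range)
    "learning_rate": (1e-4, 1e-1),           # continuous, log scale
}

def sample_config(rng):
    lo, hi = SPACE["learning_rate"]
    return {
        "model_type": rng.choice(SPACE["model_type"]),
        "num_layers": rng.randint(*SPACE["num_layers"]),
        # Sample the exponent uniformly so small and large learning
        # rates are equally likely on a log scale.
        "learning_rate": 10 ** rng.uniform(math.log10(lo), math.log10(hi)),
    }

config = sample_config(random.Random(42))
print(config)
```

Purely continuous optimizers need every one of these coordinates mapped to a real number first, whereas TPE models each dimension with an appropriate distribution and SMAC's random forests split on categoricals directly.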