automl / HPOlib

HPOlib is a hyperparameter optimization library. It provides a common interface to three state-of-the-art hyperparameter optimization packages: SMAC, Spearmint, and hyperopt. This package is discontinued; please read the longer note in the info box below.
http://automl.org/hpolib
GNU General Public License v3.0
166 stars 56 forks

Support for `scikit-optimize` a.k.a `skopt` #114

Open MechCoder opened 8 years ago

MechCoder commented 8 years ago

Hello there,

We (@glouppe , @betatim and myself) have been working on https://github.com/scikit-optimize/scikit-optimize over the past few months and just made the first release. For the next release, we are mainly concentrating on getting a set of benchmark numbers ready, so that any new "improvement" does not regress performance (in terms of rate of convergence at least).

Our initial benchmarks on the branin and hart6 functions show that, with our default parameters, we perform almost as well as the other state-of-the-art libraries. See: https://github.com/MechCoder/scikit-optimize/blob/cc789bf9131e612b6b11f764481caa7e15ec8a5f/benchmarks/RESULTS.md. For running the other benchmarks using the cross-validation strategies, we would have to add skopt as an optimizer, following http://www.automl.org/manual.html#add_optimizer.
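For context, branin is a standard two-dimensional optimization benchmark with a known global minimum of about 0.397887. A minimal, self-contained sketch in pure Python (the skopt call in the comment is indicative only, not part of the runnable code):

```python
import math
import random

def branin(x1, x2):
    """Branin benchmark; global minimum ~0.397887, attained at
    (-pi, 12.275), (pi, 2.275), and (9.42478, 2.475)."""
    a = 1.0
    b = 5.1 / (4.0 * math.pi ** 2)
    c = 5.0 / math.pi
    r = 6.0
    s = 10.0
    t = 1.0 / (8.0 * math.pi)
    return (a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2
            + s * (1.0 - t) * math.cos(x1) + s)

# Random-search baseline over the usual branin domain
# x1 in [-5, 10], x2 in [0, 15]. A Bayesian optimizer such as
# skopt's gp_minimize would be invoked roughly like:
#   from skopt import gp_minimize
#   res = gp_minimize(lambda x: branin(x[0], x[1]),
#                     [(-5.0, 10.0), (0.0, 15.0)], n_calls=50)
random.seed(0)
best = min(branin(random.uniform(-5, 10), random.uniform(0, 15))
           for _ in range(200))
print(best)
```

Convergence-rate comparisons like the ones in RESULTS.md then reduce to tracking this best-so-far value against the number of function evaluations.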

Curious to know if you would consider a Pull Request for adding support for skopt upstream?

Thanks!

mfeurer commented 7 years ago

Hi,

Thanks for your interest in HPOlib.

We would be very happy to merge your PR into our repository. In general, we recommend using the development branch, although its documentation is not up to date. Given that you are experienced Python developers, I assume you can adapt any of the existing optimizer integrations (perhaps TPE, which is already written in Python)? Otherwise, I hope to update the documentation soon so that we can do a final merge from the development branch into the master branch.

Nevertheless, we (@keggensperger, @frank-hutter and I) have planned to change the focus of HPOlib. We have found that we cannot easily integrate more sophisticated Bayesian optimization methods, as they require a different benchmark interface. Thus, we will focus on the benchmark side and collect a set of reasonable benchmarks for the community.

Best regards, Matthias

MechCoder commented 7 years ago

Hi,

Thanks for the response! I have been playing with your SurrogateBenchmarks library (https://github.com/KEggensperger/SurrogateBenchmarks/) and modifying it for skopt's needs. I feel things could be made slightly simpler: for instance, there could be an interface to directly use the predict functionality of the pickled surrogate model, and the documentation could specify how parameters are supposed to be passed to the predict method, instead of the somewhat complicated socket-programming interface.
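A minimal sketch of the kind of direct interface suggested above, using a stand-in surrogate object with an sklearn-style `predict` method (all names here are hypothetical illustrations, not SurrogateBenchmarks' actual API):

```python
import pickle

class DummySurrogate:
    """Stand-in for a trained surrogate model (e.g. a fitted
    sklearn regressor pickled by the benchmark tooling)."""
    def predict(self, X):
        # Pretend the benchmark response is the sum of the parameters.
        return [sum(row) for row in X]

# In practice the surrogate would be loaded from disk, e.g.:
#   with open("surrogate.pkl", "rb") as f:
#       model = pickle.load(f)
blob = pickle.dumps(DummySurrogate())
model = pickle.loads(blob)

# Direct call, no socket round-trip: one row per configuration,
# one column per hyperparameter, in a documented ordering.
predictions = model.predict([[1, 2], [3, 4]])
print(predictions)  # [3, 7]
```

The point is that a documented parameter ordering plus a plain `predict` call would replace the client/server socket exchange for the common case of querying the surrogate in-process.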

I'm curious to know how you are planning to solve the problem of providing more benchmarks across different packages without a common interface that integrates all the software. Both problems seem related to me.

Also, not related, but you guys should support Python 3 :p

mfeurer commented 7 years ago

Hi,

Sorry for the late reply. HPOlib isn't high on our priority list and has become too complex to maintain as a side project, so we have decided to discontinue it. Nevertheless, the old state should still work. If you want to open a PR for your optimizer, we'd still be happy to merge it.

We have put our benchmarks into a new project, HPOlib2, which will be used by our group and will hopefully be useful to other groups working on Bayesian optimization. It is Python 3 compatible and should work without much dependency hassle. If you have any suggestions on how best to deal with dependencies, please let us know (in the new repo or via email).

Best regards, Matthias