claesenm / optunity

optimization routines for hyperparameter tuning
http://www.optunity.net

Mailing List, Forum, somewhere to ask questions? #39

Open gagabla opened 9 years ago

gagabla commented 9 years ago

I have some questions about the usage of Optunity, but I was not able to find any mailing list or discussion board. Am I missing something?

claesenm commented 9 years ago

At the moment we have no mailing list or forum. If you have specific questions, feel free to open an issue as you've done here or send mails to us directly. You can find a lot of user information and examples at http://optunity.readthedocs.org/en/latest/.

gagabla commented 9 years ago

Thanks for your fast reply! I will post my questions here; since they come up before actually trying Optunity, they are more about directions than detailed discussion. If a topic makes sense on its own, I will open a separate issue.

My questions come up during preparation for a hyperparameter optimization of a deep convolutional network:

claesenm commented 9 years ago

Our current solvers are all continuous, but you can get (more or less) what you want by rounding the result.
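For example, a rough sketch of the rounding approach (train_and_score here is just a toy stand-in for an actual training run, not part of Optunity):

```python
import optunity

def train_and_score(num_layers, learning_rate):
    # toy stand-in for an expensive training run; a real objective would train
    # the network and return a validation score
    return -(num_layers - 3) ** 2 - (learning_rate - 0.01) ** 2

def objective(num_layers, learning_rate):
    # the solvers propose continuous values, so round to an integer inside the
    # objective before using it
    num_layers = int(round(num_layers))
    return train_and_score(num_layers, learning_rate)

optimal_pars, details, _ = optunity.maximize(objective, num_evals=50,
                                             num_layers=[1, 8],
                                             learning_rate=[0.001, 0.1])
```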

This is currently not supported in Optunity, though it is high on our to-do list.

You can get this effect by applying the exponential function afterwards. That said, our experiments indicate that our solvers are fairly robust against scale, e.g. if you use a linear scale where a logarithmic one is most appropriate you will still get good results (just slightly slower).
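For instance, to search a parameter on a logarithmic scale you can optimize its exponent and transform it inside the objective. A sketch (the score below is a toy function with a known optimum, standing in for real cross-validated accuracy):

```python
import optunity

def objective(logC, logGamma):
    # search over base-10 exponents and map back to the actual values
    C = 10 ** logC
    gamma = 10 ** logGamma
    # a real objective would train e.g. an SVM with (C, gamma); this toy
    # score simply peaks at logC = 1, logGamma = -3
    return -((logC - 1) ** 2 + (logGamma + 3) ** 2)

optimal_pars, _, _ = optunity.maximize(objective, num_evals=100,
                                       logC=[-4, 2], logGamma=[-6, 0])
```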

Most of Optunity's solvers are parallel by default (PSO, CMA-ES, random search & grid search). The solvers output vectors of tuples to test, which you can then parallelize in whichever way you see fit. You can enable parallelization by specifying the pmap argument in optimize, minimize or maximize as described here. To see an example of how to implement your own version of pmap you can refer to the source of our own pmap implementation (which vectorizes using Python threads), but I guess this is quite straightforward.
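For example (a sketch, assuming a version of Optunity that exposes its thread-based pmap at the top level as optunity.pmap):

```python
import optunity

def objective(x, y):
    # cheap stand-in for an expensive model evaluation
    return -(x ** 2 + y ** 2)

# optunity.pmap evaluates the proposed tuples in parallel using Python threads;
# any function with a map-like signature can be passed instead, e.g. one backed
# by multiprocessing or a cluster scheduler.
optimal_pars, details, _ = optunity.maximize(objective, num_evals=100,
                                             x=[-5, 5], y=[-5, 5],
                                             pmap=optunity.pmap)
```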

Yes, just return a bad value and all directed solvers will start looking somewhere else. This is also how we handle domain constraints internally as some of our solvers are unconstrained by nature.
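In code that can look like this (a sketch; the failure condition and the penalty value are up to you):

```python
import optunity

def objective(x, y):
    try:
        # suppose some configurations fail, e.g. training diverges or
        # runs out of memory
        if x + y > 8:
            raise RuntimeError("configuration failed")
        return -(x ** 2 + y ** 2)
    except RuntimeError:
        # return a clearly bad (but finite) score so directed solvers
        # move away from this region
        return -1e9

optimal_pars, _, _ = optunity.maximize(objective, num_evals=100,
                                       x=[-5, 5], y=[-5, 5])
```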

This is currently not supported though this is a very interesting idea. We will certainly consider extending our functionality to allow such use-cases.

gagabla commented 9 years ago

Thank you for your responses! In the meantime I stumbled upon hyperopt, which seems to have implemented my first three questions. But it is not clear which solvers it actually implements (the documentation differs from the code), and while evaluating my other questions I had a lot of trouble wrapping my mind around its code. So I will try to get this running with Optunity; you will probably hear from me sooner or later :-) Thanks for sharing your work!

claesenm commented 9 years ago

Optunity now features strategy choices as well. Check out http://optunity.readthedocs.org/en/latest/notebooks/notebooks/sklearn-automated-classification.html for an example!
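A condensed sketch of what the notebook does (the search space, the model choices and the constant score below are placeholders; a real objective would train the selected model and return cross-validated accuracy):

```python
import optunity

# conditional search space: the chosen algorithm determines which
# hyperparameters exist at all
search = {'algorithm': {'k-nn': {'n_neighbors': [1, 5]},
                        'SVM': {'kernel': {'linear': {'C': [0, 2]},
                                           'rbf': {'logGamma': [-5, 0],
                                                   'C': [0, 10]}}},
                        'naive-bayes': None}}

def performance(algorithm, n_neighbors=None, kernel=None, C=None, logGamma=None):
    # hyperparameters that do not apply to the chosen branch arrive as None
    return 0.5  # placeholder score

optimal_cfg, info, _ = optunity.maximize_structured(performance,
                                                    search_space=search,
                                                    num_evals=100)
```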

gagabla commented 9 years ago

Wow, this looks great!

I am still trying to figure out a way to reduce the size of my problem, since one evaluation of the objective function takes 3 to 4 days. My current approach of shrinking the search space by reducing the number of hyperparameters and their legal ranges has turned out to lose the representational power needed for many interesting cases (which makes the optimization result useless).

amir-abdi commented 7 years ago

How is the "strategy choices" feature implemented? For example, in PSO the particle positions are supposed to be updated according to a certain formula. Categorical features (strategy choices) cannot be integrated into this formula, as one cannot define the "distance" between two categories. So I'm wondering how you deal with these scenarios?