hyperopt / hyperopt-sklearn

Hyper-parameter optimization for sklearn
hyperopt.github.io/hyperopt-sklearn

Multiple evaluations of same parameters #116

Closed adodge closed 5 years ago

adodge commented 6 years ago

When I have a sparse search space, sometimes the search hits the exact same parameters multiple times (even when my max_evals is less than the number of possible parameter settings). I wonder where the best place to add logic to skip such redundant evaluations would be.

I'm considering adding an init parameter that memoizes the loss_fn, which would speed things up when the search revisits a point. That still "wastes" evals, though, so maybe a repeat should instead be skipped without adding an entry to the trials object (ending the search earlier) -- but then we'd end up in an infinite loop whenever the search space is actually smaller than max_evals.
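For illustration, here's a minimal sketch of the memoization idea in plain Python (no hyperopt dependency; the random-search loop is a stand-in for fmin, and all names here are made up). Repeated settings still consume evals, but they reuse the cached loss instead of re-running the expensive fit:

```python
import random

def make_memoized(loss_fn):
    """Wrap a loss function so repeat parameter settings hit a cache."""
    cache = {}
    def wrapped(params):
        key = tuple(sorted(params.items()))
        if key not in cache:       # only pay for the first visit
            cache[key] = loss_fn(params)
        return cache[key]
    wrapped.cache = cache
    return wrapped

expensive_calls = []

def loss_fn(params):               # stand-in for an expensive model fit
    expensive_calls.append(params)
    return (params["x"] - 1) ** 2

memoized = make_memoized(loss_fn)

random.seed(0)
space = [0, 1, 2]                  # small, discrete search space
trials = []
for _ in range(20):                # max_evals exceeds the number of settings
    params = {"x": random.choice(space)}
    trials.append((params, memoized(params)))

# 20 trial entries are recorded, but the expensive loss ran at most 3 times
```

This shows the trade-off mentioned above: the trials list still grows to max_evals (the "wasted" entries), while the cache caps the real work at the size of the search space.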

adodge commented 5 years ago

Coming back to this, I think the solution to this problem is to just not use hpsklearn when there's a small, discrete search space. I'll close my own issue.