When I have a sparse search space, sometimes the search hits the exact same parameters multiple times (even when my max_evals is less than the number of possible parameter settings). I wonder where the best place to add logic to skip such redundant evaluations would be.
I'm considering adding an init parameter to memoize the loss_fn, which would speed things up when the search revisits a point. Those cached hits would still "waste" evals and end the search earlier than intended, so perhaps a duplicate should instead be skipped in a way that doesn't add an entry to the trials object at all, but then we would end up in an infinite loop whenever the search space is actually smaller than max_evals.
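For reference, a minimal sketch of the memoization idea (the decorator and objective names here are hypothetical, and this assumes the objective receives the sampled parameters as a dict of hashable values, as hyperopt-style objectives typically do):

```python
from functools import wraps

def memoize_loss(loss_fn):
    """Cache loss_fn results keyed by the parameter setting, so
    revisiting a point returns instantly instead of re-evaluating.
    Note: a cached hit would still consume one of max_evals."""
    cache = {}

    @wraps(loss_fn)
    def wrapper(params):
        # Freeze the params dict into a hashable cache key.
        key = tuple(sorted(params.items()))
        if key not in cache:
            cache[key] = loss_fn(params)
        return cache[key]

    return wrapper

# Hypothetical expensive objective over a small discrete space.
calls = []

@memoize_loss
def loss_fn(params):
    calls.append(params)           # track how often we really evaluate
    return (params["x"] - 1) ** 2  # stand-in for an expensive model fit

loss_fn({"x": 3})
loss_fn({"x": 3})  # duplicate point: served from the cache, no new call
```

This only addresses the wasted computation, not the wasted evals; skipping the trials entry entirely would need the infinite-loop guard discussed above.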
Coming back to this, I think the solution is simply not to use hpsklearn when the search space is small and discrete. I'll close my own issue.