Closed: @23pointsNorth closed this issue 2 years ago.
Hello @23pointsNorth,
thanks for the detailed explanation! As you pointed out, it sounds like the solution from issue SimonBlanke/Hyperactive#42 is what you are looking for. I have not released this feature yet, but you can clone the dev branch and try it for yourself if you like.
If you can confirm that this fixes your problem, let me know. I will add tests, merge it into master, and release it in v4.1.0 :-)
Hi @SimonBlanke,
A few notes: I'm transitioning from 3.x, so a couple of diffs would make sense: (1) the new submodule for the search strategies could be mentioned in the Roadmap section; (2) to get unique samples from the results, I usually had to drop `eval_times`/`iter_times` from the DataFrame and then call `.unique()`. Currently, the returned DataFrame doesn't have these columns, but they are still in the documentation/README. I'm not sure whether the error is on my side or whether the DataFrame has stopped returning those values.
Hello @SimonBlanke,
https://github.com/SimonBlanke/Hyperactive/issues/42 indeed helped with the issue. As I understand it, any change made to that dictionary within the optimization function will persist to the next call? Looking forward to getting it stable and on PyPI :)
Very nice! Yes, the changes to the dictionary should persist. I will add a test for this case.
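To illustrate what "the changes should persist" means here, a minimal plain-Python sketch (not Hyperactive's actual internals; the `pass_through` key and `opt` dict are stand-ins for the real API) of a mutable dictionary shared across objective-function calls:

```python
# Minimal sketch (NOT Hyperactive's actual implementation) of how a
# shared pass_through dictionary persists across objective calls.

def objective(opt):
    # Read an external parameter from the shared dictionary ...
    alpha = opt["pass_through"]["alpha"]
    # ... and mutate it; the change is visible on the next call,
    # because the same dict object is handed to every call.
    opt["pass_through"]["n_calls"] = opt["pass_through"].get("n_calls", 0) + 1
    return alpha * opt["x"]

pass_through = {"alpha": 2}
for x in range(3):
    objective({"x": x, "pass_through": pass_through})

print(pass_through["n_calls"])  # 3
```

The key point is that the optimizer passes the same dictionary object into each call, so mutations survive between iterations rather than being reset.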
Few notes
(1) I do not understand what you mean by that. The fact that the optimizers are now imported from `hyperactive.optimizers`? (2) You are correct! This slipped through and will be corrected. The eval-/iter-times are not in the search data by default (which makes more sense in my opinion). They will be accessible again in v4.1.0.
Thanks for the help and great suggestions! I will close this issue as soon as this feature is released.
Perfect!
The notes were mostly on the documentation side: (1) i.e., adding a note that those have moved. (2) It was just to highlight the mismatch with the README. I agree with you that not including them by default makes more sense.
Hello @23pointsNorth,
the `pass_through` feature has been released. See SimonBlanke/Hyperactive#42 for more information.
**Is your feature request related to a problem? Please describe.**
There are situations in which the optimization function is governed by different external parameters. E.g. if the optimization score is calculated as `s = alpha * x + beta`, where `alpha` and `beta` differ depending on the initial conditions, it can become handy to be able to pass those in, either as arguments of the optimization function or through the `opt` input variable.

**Describe alternatives you've considered**
Right now this can be done by creating a lambda function, depending on the initial condition, that wraps the hyperactive optimization call, e.g. `optim_func = lambda opt: true_optim_func(opt, alpha=1, beta=external_var)`, and changing the lambda dynamically. This, however, does not work if we want to use `n_jobs != 1`, as `mp.Pool` cannot serialize the lambda function.

**Describe the solution you'd like**

**Additional context**
This would even allow extra functionality, like optimizing symbolic functions that are externally referenced.

edit: assessing if this type of alternative works.
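The serialization problem above can be reproduced with the standard library alone. A sketch (function names mirror the hypothetical `true_optim_func` from the example; this is not Hyperactive code) showing that the lambda fails to pickle while a `functools.partial` over a module-level function does not:

```python
import pickle
from functools import partial

# Hypothetical objective mirroring the example above: the external
# parameters alpha and beta are ordinary keyword arguments.
def true_optim_func(opt, alpha, beta):
    return alpha * opt["x"] + beta

external_var = 5

# A lambda closing over external_var cannot be pickled for mp.Pool ...
optim_lambda = lambda opt: true_optim_func(opt, alpha=1, beta=external_var)
try:
    pickle.dumps(optim_lambda)
except (pickle.PicklingError, AttributeError) as e:
    print("lambda not picklable:", type(e).__name__)

# ... but a partial over a module-level function pickles by reference.
optim_func = partial(true_optim_func, alpha=1, beta=external_var)
restored = pickle.loads(pickle.dumps(optim_func))
print(restored({"x": 2}))  # 1 * 2 + 5 = 7
```

This is one workaround when `n_jobs != 1` is needed without the `pass_through` feature: `functools.partial` objects carry their bound arguments through pickling, whereas a lambda is anonymous and cannot be located by name when the worker process unpickles it.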