berndbischl closed this issue 4 years ago
IDK, this is just confusing. Can we just keep it simple and tag functions as working with multiple proposed points, and then let the BB function decide how to evaluate the points? ML stuff could still just call benchmark(), and for more general functions you can always encapsulate and parallelize yourself (we can offer helpers for this, though).
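A rough sketch of what that "tag the function" idea could look like (the names `multi_point` and `evaluate_batch` are illustrative, not an actual bbotk interface):

```r
# an objective tagged as multi-point receives the whole batch of proposed
# points and decides itself how to evaluate them; an ML objective could
# turn the batch into a design and call mlr3::benchmark() internally
objective = list(
  multi_point = TRUE,
  fun = function(xs) {
    vapply(xs, function(x) sum(unlist(x)^2), numeric(1))
  }
)

evaluate_batch = function(obj, xs) {
  if (isTRUE(obj$multi_point)) {
    obj$fun(xs)                      # function handles the batch itself
  } else {
    vapply(xs, obj$fun, numeric(1))  # caller loops and could parallelize
  }
}

# evaluate_batch(objective, list(c(x1 = 1, x2 = 2), c(x1 = 3, x2 = 4)))
```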
I don't really understand what you are saying; I think we need to discuss this in person.
Yep let's do this tomorrow.
What I mean: the evaluator is something that's constructed internally, right? So we can basically use a flag on construction that decides whether it provides parallelization as a service, or whether it assumes the function is parallelized itself. But note that the objective function now evaluates a SINGLE point, so calling "benchmark" internally is not so easy anymore. I think we need to sort this out immediately.
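A minimal sketch of such a construction flag (class and argument names are made up here, not bbotk's actual API):

```r
library(R6)
library(future.apply)

# hypothetical Evaluator: the flag set at construction decides whether it
# offers parallelization as a service or trusts the objective to handle
# the whole batch itself
Evaluator = R6Class("Evaluator",
  public = list(
    objective = NULL,
    parallelize = NULL,

    initialize = function(objective, parallelize = TRUE) {
      self$objective = objective
      self$parallelize = parallelize
    },

    # xs: a list of single points
    eval_batch = function(xs) {
      if (self$parallelize) {
        # the evaluator parallelizes single-point evaluations via future
        future.apply::future_lapply(xs, self$objective)
      } else {
        # the objective is assumed to handle the batch itself,
        # e.g. by building a design and calling mlr3::benchmark()
        self$objective(xs)
      }
    }
  )
)
```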
Solved with the new architecture.
Not sure how to handle this:
The package should allow for the following: if multiple points are evaluated, this should be parallelized (via future) and encapsulated (via callr). It now seems reasonable to copy over / do something similar to what mlr3 does. NB: I have no problem with copy-pasting that code, that's not the issue here!
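For illustration, a minimal sketch of the "parallelized by future, encapsulated by callr" combination (the helper name is made up; this is not the mlr3 code in question):

```r
library(future.apply)

# evaluate a list of points: parallelized via future,
# each evaluation encapsulated in a fresh R session via callr
eval_points = function(fun, xs) {
  future.apply::future_lapply(xs, function(x) {
    callr::r(function(fun, x) fun(x), args = list(fun = fun, x = x))
  })
}

# usage:
# future::plan("multisession")
# eval_points(function(x) sum(x^2), list(c(1, 2), c(3, 4)))
```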
If I do that, and bbotk is used in mlr3tuning, we now have these features twice. That seems confusing to the user. Example: I could now switch on the parallel option for bbotk, but I could also switch it on for mlr3. The same goes for encapsulation.
What would be the best way out here? @mllg