Closed pfistfl closed 4 years ago
Out of curiosity: is nloptr really so much better suited for threshold tuning than optim()?
I do not know how it compares for threshold tuning; empirically the advantage at least seems to hold for ensemble weights (page 47).
What is the status here? I think you wanted to look at this @be-marc @jakob-r or is any action from my side required?
I guess either @be-marc or I have to adapt that to all the changes after #248 is merged.
@pfistfl Sorry, I missed that you had already created a PR. I used the code you posted here for the implementation. The only difference is that I stuck more closely to the package defaults: x0 is not created from a random design, and algorithm is also a required parameter. You can find the code in bbotk::OptimizerNLoptr and mlr3tuning::TunerNLoptr. If you want to improve something, open a new issue or PR.
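For reference, a minimal sketch of how the tuner might be invoked via mlr3tuning (assuming a current mlr3tuning with the `ti()` helper; the task, learner, search range, algorithm, and evaluation budget below are illustrative assumptions, not package defaults):

```r
library(mlr3)
library(mlr3tuning)

# Sketch: tune a learner hyperparameter with the NLopt-based tuner.
# "nloptr" is assumed to be the key under which TunerNLoptr is registered;
# the algorithm must be supplied explicitly (it is a required parameter).
instance = ti(
  task = tsk("sonar"),
  learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1)),
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  terminator = trm("evals", n_evals = 20)
)

# BOBYQA is a derivative-free NLopt algorithm, so no gradients are needed.
tuner = tnr("nloptr", algorithm = "NLOPT_LN_BOBYQA")
tuner$optimize(instance)
instance$result  # best configuration found within the budget
```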
As I want to use it for threshold tuning, I added nloptr. It basically allows for nonlinear optimization with equality/inequality constraints. Currently we can only use the derivative-free parts of nloptr, since we usually have no gradient information available; before extending to gradient-based algorithms, we should decide what that interface would look like.
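To illustrate the derivative-free plus inequality-constraint combination mentioned above, here is a small standalone nloptr sketch with a toy objective (not threshold tuning itself), using COBYLA, which requires no gradients:

```r
library(nloptr)

# Toy objective: squared distance to the point (1, 2).
eval_f <- function(x) (x[1] - 1)^2 + (x[2] - 2)^2

# Inequality constraint in nloptr's g(x) <= 0 form: x1 + x2 <= 2.
eval_g_ineq <- function(x) x[1] + x[2] - 2

res <- nloptr(
  x0 = c(0, 0),
  eval_f = eval_f,
  eval_g_ineq = eval_g_ineq,
  lb = c(-5, -5), ub = c(5, 5),
  opts = list(
    algorithm = "NLOPT_LN_COBYLA",  # derivative-free, handles inequality constraints
    xtol_rel = 1e-6,
    maxeval = 500
  )
)

res$solution  # approximately c(0.5, 1.5): the projection of (1, 2) onto x1 + x2 = 2
```

The unconstrained minimum (1, 2) violates the constraint, so the solver lands on the constraint boundary, which is exactly the kind of behavior optim() cannot express directly.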
Currently the "TunerNloptr with int params and trafo" unit tests fail, but this does not seem to be caused by my changes.