Closed: SebGGruber closed this issue 4 years ago.
The problem here is that `ps` already contains a `$trafo` that exp-transforms a few of the other parameters. Replacing this `$trafo` with a new function means those other parameters are suddenly no longer exp-transformed and are sampled on the wrong scale (giving negative values to parameters that should be positive).
(This is an unfortunate design decision of the `$trafo` in paradox that I'm not entirely happy with myself, but we are stuck with that for now. I already complained about this here: https://github.com/mlr-org/paradox/issues/248)
The best solution I can see here is to "append" the new trafo somehow:
```r
# save the original ps$trafo in an extra variable so it can still be called
old_trafo = ps$trafo
ps$trafo = function(x, param_set) {
  # apply the new transformation first ...
  x$xgboost.nrounds = 3^x$xgboost.nrounds
  # ... then the original trafo, which still exp-transforms the other parameters
  old_trafo(x, param_set)
}
```
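To check that the composition behaves as intended, you can sample a few configurations and apply the trafo directly (a minimal sketch, assuming `ps` is the search space from above):

```r
library(paradox)

# transpose() applies the (composed) trafo to each sampled configuration;
# xgboost.nrounds should come out as 3^x, and the parameters handled by
# the old trafo should still be exp-transformed
xs = generate_design_random(ps, n = 5)$transpose()
```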
Check `ti$archive(unnest = "params")` to see that the parameter values as seen by the `Learner` are what is desired.
Note that the xgboost learner has `xgboost.early_stopping_rounds` set to 10; you probably want to unset this if you don't want xgboost to stop early and instead want to control the number of rounds directly:

```r
learner$param_set$values$xgboost.early_stopping_rounds = NULL
```
When adding a trafo to the parameter set, some seemingly undesired asserts perform unnecessary checks and raise an assert error. This currently makes log-scale sampling impossible with random search. Steps to reproduce:
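(The original reproduction code is not preserved in this excerpt; the following is a hypothetical stand-in, with made-up parameter names and written against the paradox API of the time, that triggers the same mechanism: a search space whose existing trafo exp-transforms a log-scale parameter, which a newly assigned trafo silently replaces.)

```r
library(paradox)

# hypothetical search space: `eta` is defined on log scale and mapped
# back to its natural scale by the set's existing trafo
ps = ParamSet$new(list(
  ParamDbl$new("eta", lower = log(1e-4), upper = 0),
  ParamInt$new("nrounds", lower = 1, upper = 5)
))
original_trafo = function(x, param_set) {
  x$eta = exp(x$eta)
  x
}
ps$trafo = original_trafo

# assigning a custom trafo *replaces* the existing one, so `eta` is no
# longer exp-transformed
ps$trafo = function(x, param_set) {
  x$nrounds = 3^x$nrounds
  x
}

xs = generate_design_random(ps, n = 3)$transpose()
# xs now contains negative `eta` values; assigning such a value to a
# parameter that must be positive fails its bound assertion
```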
Removing the trafo, on the other hand, gives no assert error:
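(Again a hypothetical stand-in rather than the original code: restoring the set's original trafo, i.e. removing the custom one, keeps `eta` on its natural positive scale, so no assertion fires.)

```r
# restore the original trafo instead of the custom one
ps$trafo = original_trafo
generate_design_random(ps, n = 3)$transpose()
# `eta` is exp-transformed back into (0, 1], so downstream asserts pass
```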