Closed henrifnk closed 4 years ago
You cannot access the archive because the $model field in AutoTuner is empty when using a GraphLearner. at$archive() just returns the archive stored in $model. You can access the archive with gl_at$model$regr.rpart.tuned$model$tuning_instance$archive().
What you describe seems to be normal, since the GraphLearner never writes anything to the $model field of the original learner.
> learner = lrn("regr.rpart")
> gl = GraphLearner$new(learner, task_type = "regr")
> gl$train(tsk("mtcars"))
> learner$model
NULL
Errm, that at the very least seems confusing? It also seems like a violation of a contract somewhere?
I am pretty sure that the setup that Henri posted should work, and that we should test against it.
GraphLearner creates a clone of whatever it encapsulates, so the original at is not changed when the pipeline gets tuned. This is consistent with our behaviour in many other places, where we tend to create clones when we encapsulate something, to avoid bad surprises.
@henrifnk I think I also saw a conceptual error in your posted example. Why are you wrapping an AutoTuner with a GraphLearner? You would do it the other way around, right? Create a Graph. Wrap it with a GraphLearner. Wrap that with an AutoTuner?
I THOUGHT the issue would be quite fundamental, and that you wanted to report that this wouldn't work.
ps = ParamSet$new(list(
  ParamInt$new("classif.rpart.minsplit", lower = 1, upper = 10)
))
pl = po("removeconstants") %>>% lrn("classif.rpart")
gl = GraphLearner$new(pl)
at = AutoTuner$new(gl,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = ps,
  terminator = term("evals", n_evals = 1L),
  tuner = tnr("grid_search", resolution = 1L)
)
at$train(tsk("iris"))
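For completeness, once the AutoTuner above has been trained, its archive is reachable directly on the AutoTuner itself, without digging into a GraphLearner's $model. A minimal sketch, assuming the mlr3tuning API used elsewhere in this thread (archive exposed as a method on the tuning instance; in later mlr3tuning versions it is the field at$archive instead):

```r
# Continuing from the example above, after at$train(tsk("iris")):
# the tuning instance is stored on the AutoTuner and carries the archive.
at$tuning_instance$archive()  # returns the table of evaluated configurations
```

This is exactly the access path that works for the wrapped case (through $model on the GraphLearner); the point here is that with the recommended nesting order (Graph -> GraphLearner -> AutoTuner), no such indirection is needed.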
But it does work.
Can you please give some context WHY you are wrapping the objects above as you do?
My intention was simply to have a static part and a tunable part of the learner. Since both GraphLearner and AutoTuner are "learners", I thought it might not make a difference in which order to connect them.
Still, the way you set up your example works for me and solves the problem.
The only remaining way this could cause trouble, as far as I can guess, is if you wanted to have multiple AutoTuners in a graph for some reason...
This seems to be solved; afaics the code behaves as expected here.
If I create an AutoTuner, wrap it in a GraphLearner and train the GraphLearner, I am not able to access the AutoTuner's archive in the usual manner via
at$archive()
A simple example to reconstruct the issue:
After training,
at$archive()
throws: Error in at$archive() : attempt to apply non-function
and the same applies, more or less, to piped AutoTuners...
only in this case the archive is somehow accessible through