be-marc closed this pull request 4 years ago.
Merging #217 into master will increase coverage by 0.21%. The diff coverage is 60%.
```diff
@@            Coverage Diff             @@
##           master     #217      +/-   ##
==========================================
+ Coverage   91.83%   92.04%   +0.21%
==========================================
  Files          18       18
  Lines         306      352      +46
==========================================
+ Hits          281      324      +43
- Misses         25       28       +3
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| R/TuningInstance.R | 91.07% <60%> (+0.06%) | :arrow_up: |
| R/Tuner.R | 85.36% <0%> (+0.07%) | :arrow_up: |
| R/AutoTuner.R | 98.5% <0%> (+0.46%) | :arrow_up: |
Continue to review full report at Codecov.
> ...since no results were written to `private$.result` by a Tuner class. Should we omit the result output in this case?
imo yes, looks weird otherwise
The indentation also looks a bit off here. Is this how it's supposed to be?
We now check whether the `TuningInstance` was called by a `Tuning*` object to determine if a tuning was conducted. We do this indirectly by checking `!is.null(self$result$perf)` instead of using `self$n_evals != 0`, since `n_evals` is also increased if just `$eval_batch()` is called without using a tuner. `self$result$perf` is always written by a `Tuning*` object.
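To illustrate the check, here is a minimal, standalone sketch (not the package code; only the condition `!is.null(...$result$perf)` mirrors the actual change):

```r
# Hypothetical helper: print the result section only if a Tuning*
# object actually wrote to the result slot.
print_instance_state = function(instance) {
  tuned = !is.null(instance$result$perf)  # the condition used in this PR
  cat("<TuningInstance>\n")
  cat("* State:", if (tuned) "Tuned" else "Not tuned", "\n")
  if (tuned) {
    cat("* Result perf:\n")
    print(instance$result$perf)
  }
}

# Usage with plain lists standing in for a TuningInstance:
print_instance_state(list(result = list(perf = NULL)))                 # Not tuned
print_instance_state(list(result = list(perf = c(classif.ce = 0.04)))) # Tuned
```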
@pat-s Yes, the indentation is on purpose. Do you prefer another way to structure the output?
How about `$result` being a list:

- `perf`: data.table with cols `id` and `value`
- `tune_x`: data.table with cols `id` and `value`

Do we really need `$params`? It's just showing the same as `tune_x` for the tuned params, plus the untuned ones? Maybe just append the untuned ones to the `tune_x` output? (A sketch of the proposed structure follows below.)
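A minimal sketch of this proposed structure, with illustrative values taken from the example output later in this thread:

```r
library(data.table)

# Hypothetical $result as a list of two long-format data.tables,
# as suggested above (field names are the proposal, not the final API)
result = list(
  perf   = data.table(id = c("classif.ce", "classif.acc"), value = c(0.04, 0.96)),
  tune_x = data.table(id = "cp", value = 0.03188852)
)
result$perf
result$tune_x
```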
After you've made changes, please also post the example output in a comment (maybe a small reprex helps), and do not edit the first comment; otherwise some comments cannot be related to the intermediate states of the work :)
> Do we really need `$params`? It's just showing the same as `tune_x` for the tuned params, plus the untuned ones? Maybe just append the untuned ones to the `tune_x` output?
`tune_x` shows the hyperparameters without the trafo applied and `params` with the trafo applied.
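For context, a minimal paradox sketch of that distinction (the log2-scale trafo here is made up for illustration and is not part of this PR):

```r
library(paradox)

# Hypothetical search space: cp is tuned on the log2 scale
ps = ParamSet$new(list(
  ParamDbl$new("cp", lower = -10, upper = -1)
))
ps$trafo = function(x, param_set) {
  x$cp = 2^x$cp  # back-transform to the natural scale
  x
}

x = list(cp = -5)  # what the tuner proposes: corresponds to tune_x (no trafo)
ps$trafo(x, ps)    # what the learner receives: corresponds to params (trafo applied)
```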
Ah ok. Then I'd say only `params`? `tune_x` is in the object anyway, but for the printout `params` should be sufficient then?
New print output:
```
<TuningInstance>
* State: Tuned
* Task: <TaskClassif:iris>
* Learner: <LearnerClassifRpart:classif.rpart>
* Measures: classif.ce, classif.acc
* Resampling: <ResamplingHoldout>
* Terminator: <TerminatorEvals>
* bm_args: list()
* n_evals: 20
* Result:
  perf:
     classif.ce classif.acc
  1:       0.04        0.96
  tune_x:
             cp
  1: 0.03188852
  params:
     xval         cp
  1:    0 0.03188852
* ParamSet:
   id    class lower upper levels     default value
1: cp ParamDbl     0  0.05        <NoDefault>
```
Reprex:

```r
library(mlr3)
library(mlr3tuning)
library(paradox)

ps = ParamSet$new(list(
  ParamDbl$new("cp", lower = 0, upper = 0.05)
))

instance = TuningInstance$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measures = msrs(c("classif.ce", "classif.acc")),
  param_set = ps,
  terminator = term("evals", n_evals = 20)
)

tuner = TunerRandomSearch$new()
tuner$tune(instance)
print(instance)
```
> Ah ok. Then I'd say only `params`? `tune_x` is in the object anyway, but for the printout `params` should be sufficient then?

Not sure about this. @berndbischl @pfistfl What do you think?
pls change:

a) Only show `tune_x`, that's really the result from tuning, not `params`.
b) Your print code looks bad in code and output; why not simply use `mlr3misc::as_short_string` instead of
   `catf(paste0(" ", capture.output(as.data.table(rbind(self$result$perf)))), sep = "\n")`?
c) Also, I wouldn't double-indent stuff; `Result perf` / `Result tune_x` seems fine.
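For reference, a small sketch of the suggested direction (the values are illustrative, and the exact string `as_short_string()` returns may differ):

```r
library(mlr3misc)

perf = c(classif.ce = 0.06, classif.acc = 0.94)  # illustrative values

# as_short_string() collapses an object into one compact string,
# e.g. something like "0.06, 0.94" for this vector:
as_short_string(perf)

# combined with catf() this yields a one-line printout:
catf("* Result perf: %s", paste(sprintf("%s=%s", names(perf), perf), collapse = ", "))
```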
This ugly code was necessary to indent the `data.table`s.
The output looks like this now:

```
<TuningInstance>
* State: Tuned
* Task: <TaskClassif:iris>
* Learner: <LearnerClassifRpart:classif.rpart>
* Measures: classif.ce, classif.acc
* Resampling: <ResamplingHoldout>
* Terminator: <TerminatorEvals>
* bm_args: list()
* n_evals: 20
* Result perf:
   classif.ce classif.acc
1:       0.06        0.94
* Result tune_x:
           cp
1: 0.01549386
ParamSet:
   id    class lower upper levels     default value
1: cp ParamDbl     0  0.05        <NoDefault>
```
> \* Result perf:
>    classif.ce classif.acc
> 1:       0.06        0.94

This is a named numeric vector, why do we print it as `data.table()`? Would you consider `paste(sprintf("%s: %g", names(perf), perf), collapse = ", ")`?
> \* Result tune_x:
>            cp
> 1: 0.01549386

Same here, I don't see why we use the `data.table()` printer. `sprintf()` + `as_short_string()`?
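A minimal sketch of the suggested one-line formatting, using the values from the example output above:

```r
perf = c(classif.ce = 0.06, classif.acc = 0.94)
tune_x = list(cp = 0.01549386)

cat("* Result perf:", paste(sprintf("%s: %g", names(perf), perf), collapse = ", "), "\n")
#> * Result perf: classif.ce: 0.06, classif.acc: 0.94

cat("* Result tune_x:", paste(sprintf("%s=%g", names(tune_x), unlist(tune_x)), collapse = ", "), "\n")
#> * Result tune_x: cp=0.0154939
```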
@mllg Didn't you make this PR completely obsolete with your last commit? Close here?
@mllg Thank you. I just used the code from https://github.com/mlr-org/mlr3tuning/commit/1cf4b2efa2c1002b74af9b31d58427fab8925e65
@berndbischl We also wanted to remove the archive output and indicate the state of the `TuningInstance`.
Output looks like this now:

```
<TuningInstance>
* State: Tuned
* Task: <TaskClassif:iris>
* Learner: <LearnerClassifRpart:classif.rpart>
* Measures: classif.ce, classif.acc
* Resampling: <ResamplingHoldout>
* Terminator: <TerminatorEvals>
* bm_args: list()
* n_evals: 20
* Result perf: classif.ce=0, classif.acc=1
* Result tune_x: cp=0.001542, minsplit=11
ParamSet:
         id    class lower upper levels     default value
1:       cp ParamDbl     0  0.05        <NoDefault>
2: minsplit ParamInt    10 12.00        <NoDefault>
```
Test failure: `instance` --> `self`.
@be-marc pls only push after you have passed local tests, otherwise this back-and-forth eats up time for others. This problem you can easily see locally; Travis is not there for that...
Sorry. Should work now.
Closes #206

Output before tuning

Output after tuning

If we just use `$eval_batch()` instead of `$tune()`, the output looks like this, since no results were written to `private$.result` by a Tuner class. Should we omit the result output in this case? (Fixed with new commit.)