It seems the problem lies in the `reduceBatchmarkResults()` function: it does not list all the learners used in the benchmark experiment, apparently only the last one. The results in `BM$results` have the correct naming convention.

EDIT: After testing it out, it looks like the BMR functions only work under the assumption that every learner is crossed with every task, which need not be true with `batchmark()`. I don't know how to solve this.
```r
BM$learners
#> $<NA>
#> NULL
#>
#> $<NA>
#> NULL
#>
#> $<NA>
#> NULL
#>
#> $<NA>
#> NULL
#>
#> $<NA>
#> NULL
#>
#> $<NA>
#> NULL
#>
#> $<NA>
#> NULL
#>
#> $<NA>
#> NULL
#>
#> $<NA>
#> NULL
#>
#> $<NA>
#> NULL
#>
#> $<NA>
#> NULL
#>
#> $regr.plsr.tuned
#> Learner regr.plsr.tuned from package pls
#> Type: regr
#> Name: ; Short name:
#> Class: TuneWrapper
#> Properties: numerics,factors
#> Predict-Type: response
#> Hyperparameters: method=simpls
```
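A minimal sketch of how such a partially-crossed design can arise (assuming mlr with a batchtools experiment registry; the tasks, learners, and the subset of jobs submitted are all made up for illustration):

```r
library(mlr)
library(batchtools)

# Throwaway experiment registry
reg = makeExperimentRegistry(file.dir = NA, seed = 1)

# Two tasks: the full Boston housing task and a feature-reduced copy
tasks = list(
  bh.task,
  subsetTask(bh.task, features = getTaskFeatureNames(bh.task)[1:5])
)
learners = list(makeLearner("regr.lm"), makeLearner("regr.rpart"))

batchmark(learners = learners, tasks = tasks,
          resamplings = makeResampleDesc("CV", iters = 2), reg = reg)

# Deliberately submit only one learner's jobs, so not every
# learner/task combination is present in the results
submitJobs(findExperiments(algo.name = "regr.rpart", reg = reg), reg = reg)
waitForJobs(reg = reg)

BM = reduceBatchmarkResults(ids = findDone(reg = reg), reg = reg)
BM$learners  # reportedly lists $<NA> NULL entries instead of all learners
```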
When one needs to create several tasks that should not be run with a specific learner (i.e., not every possible learner/task combination), using `reduceBatchmarkResults()` is fine, but `getBMRAggrPerformances()` does not work.
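For instance (hypothetical calls against a `BM` object reduced from such a partially-crossed design):

```r
# Fine: the reduced BenchmarkResult itself, with correct naming
BM$results

# Reportedly breaks when not every learner/task combination exists
getBMRAggrPerformances(BM, as.df = TRUE)
```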
The following is the use case: I want to run a benchmark where these learners have to be adjusted so that the maximum `ncomp` for PLS changes according to a task with a reduced feature set.
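A minimal sketch of what that might look like (the helper name and the grid-tuning choice are mine, assuming mlr's `makeTuneWrapper()`; the `ncomp` upper bound is capped by `getTaskNFeats()` so it adapts to a feature-reduced task):

```r
library(mlr)

# Hypothetical helper: a tuned PLS learner whose ncomp search space
# is bounded by the number of features in the given task
makeTunedPls = function(task) {
  ps = makeParamSet(
    makeIntegerParam("ncomp", lower = 1L, upper = getTaskNFeats(task))
  )
  makeTuneWrapper(
    makeLearner("regr.plsr", method = "simpls"),
    resampling = makeResampleDesc("CV", iters = 3L),
    par.set = ps,
    control = makeTuneControlGrid()
  )
}

# One tuned learner per task; these per-task learners are exactly why
# the learner/task grid ends up not fully crossed
tuned.learners = lapply(tasks, makeTunedPls)
```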