mlr-org / mlr3tuning

Hyperparameter optimization package of the mlr3 ecosystem
https://mlr3tuning.mlr-org.com/
GNU Lesser General Public License v3.0

Visualization of nested tuning during resampling? #346

Closed · pat-s closed 1 year ago

pat-s commented 1 year ago

Might also fit into {mlr3viz} as it concerns visualization, but I think the discussion should go here since it's about class conversion/return values.

Currently we have autoplot.TuningInstanceSingleCrit().

When tuning within resample() and setting the appropriate flags (i.e. store_models = TRUE), one can call extract_inner_tuning_archives() or extract_inner_tuning_results().

The returned data.table should hold the same information as a TuningInstanceSingleCrit. Yet extract_inner_tuning_archives() returns a data.table directly rather than a TuningInstanceSingleCrit, so autoplot() won't work with it.
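For reference, a minimal sketch of the setup being discussed; the rpart learner, pima task, random search tuner, and budget are placeholders chosen for illustration, not taken from the issue:

```r
library(mlr3)
library(mlr3tuning)
library(paradox)  # for to_tune()

# inner loop: tune the complexity parameter of a decision tree
at = AutoTuner$new(
  learner    = lrn("classif.rpart", cp = to_tune(1e-4, 0.1)),
  resampling = rsmp("cv", folds = 3),
  measure    = msr("classif.ce"),
  terminator = trm("evals", n_evals = 20),
  tuner      = tnr("random_search")
)

# outer loop: store_models = TRUE keeps the fitted AutoTuners
# (and thus their tuning instances) in the ResampleResult
rr = resample(tsk("pima"), at, rsmp("cv", folds = 3), store_models = TRUE)

extract_inner_tuning_results(rr)   # data.table, one row per outer fold
extract_inner_tuning_archives(rr)  # data.table with all inner evaluations
```

With store_models = TRUE, the fitted AutoTuner of each outer fold also remains accessible via rr$learners.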

Questions:

be-marc commented 1 year ago

I don't think it makes sense. Take, for example, the method Archive$best(), which returns the best-scoring evaluation. A combined archive would contain hyperparameter configurations that were evaluated on different resamplings, so you cannot compare the scores with each other in a meaningful way.

And also:

> Keep in mind that nested resampling is a statistical procedure to estimate the predictive performance of the model trained on the full dataset. Nested resampling is not a procedure to select optimal hyperparameters.

(mlr3book)

So be careful when analyzing the hyperparameter configurations and scores of the inner resampling loop. This post gives you an idea of what you can do with the results.
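To illustrate the point about per-fold archives, a hedged sketch of inspecting them one outer fold at a time rather than in a combined table (reusing rr from the sketch above; requires store_models = TRUE):

```r
# one archive per outer fold, accessed via the stored AutoTuners
best_per_fold = lapply(rr$learners, function(at) at$tuning_instance$archive$best())
```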

pat-s commented 1 year ago

> I don't think it makes sense. Take, for example, the method Archive$best(), which returns the best-scoring evaluation. A combined archive would contain hyperparameter configurations that were evaluated on different resamplings, so you cannot compare the scores with each other in a meaningful way.
>
> So be careful when analyzing the hyperparameter configurations and scores of the inner resampling loop. This post gives you an idea of what you can do with the results.

Thanks, I am aware. This is not about whether analyzing the tuning results of the inner loop "makes sense" or leaves room for misinterpretation. My point is that it should not matter where a tuning result comes from (nested CV or "direct" tuning); I want to be able to do the same things with it 🙂 That is, if I can plot a "direct"/simple tuning result, I would also like to do so for a specific tuning result from the inner loop of a nested CV.

I am aware that the extraction of the inner tuning results would need to change for this, including the returned object type.

But AFAIR this was possible in the old {mlr}; I did it in my first paper, where I analyzed the effects of spatial tuning in the inner loop and looked at individual tuning results (at the repetition level).
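For what it's worth, a per-fold look at the inner results is still possible with the data.table returned by extract_inner_tuning_archives(). A sketch, assuming the cp search space and classif.ce measure from the sketch above (the column names follow the parameter and measure ids):

```r
library(ggplot2)

arch = extract_inner_tuning_archives(rr)

# 'iteration' marks the outer resampling fold each inner evaluation belongs to
ggplot(arch, aes(x = cp, y = classif.ce)) +
  geom_point() +
  facet_wrap(vars(iteration))
```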

be-marc commented 1 year ago

> That is, if I can plot a "direct"/simple tuning result, I would also like to do so for a specific tuning result from the inner loop of a nested CV.

You can do this with autoplot(rr$learners[[1]]$tuning_instance). Binding tuning instances does not make sense. We could create a special result class for nested CV, but that seems over-engineered.

> I analyzed the effects of spatial tuning in the inner loop and looked at individual tuning results

You analyzed the effects of nested resampling itself. That probably doesn't happen very often. autoplot(rr$learners[[1]]$tuning_instance) is not very user-friendly but should be enough for these cases.
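Spelled out as a sketch (rr as above; mlr3viz provides the autoplot() method for tuning instances mentioned at the top of the issue):

```r
library(mlr3viz)

# tuning instance of the first outer fold
autoplot(rr$learners[[1]]$tuning_instance)

# or one plot per outer fold
plots = lapply(rr$learners, function(at) autoplot(at$tuning_instance))
```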

pat-s commented 1 year ago

> You can do this with autoplot(rr$learners[[1]]$tuning_instance).

That's what I needed! With this, the only thing I am missing is a note in ?extract_inner_tuning_results mentioning it: i.e. that extract_inner_tuning_results() returns a data.table, but if the TuningInstance is wanted, one can use the accessor above.

> You analyzed the effects of nested resampling itself. That probably doesn't happen very often. autoplot(rr$learners[[1]]$tuning_instance) is not very user-friendly but should be enough for these cases.

I agree, and it's even good that it's "not so easy". But I think it's important enough to mention this shortcut somewhere in the help pages; extract_inner_tuning_results() could be a good place, as this is where I started looking when aiming to get the tuning results out and visualize them.

be-marc commented 1 year ago

You can now do

```r
tab = extract_inner_tuning_results(rr, tuning_instance = TRUE)
autoplot(tab$tuning_instance[[1]])
```

Updated the documentation of extract_inner_tuning_results(). Thanks for your comments! Closed by #348.