Here are a few scenarios I would like to describe in this feature request:

1) Combining/crossing multiple resampling methods (e.g. `spatial_nndm_cv()` crossed with `sliding_period()` resamples)

2) Determining/benchmarking the best model, similar to `rank_results()`, across different resampling methods (e.g. `group_vfold_cv(cities)` vs. `spatial_clustering_cv(coords = c(x, y))`)

3) The ability to pass a list of multiple resampling methods into model specifications, e.g. `workflow_set(resamples = list(*))`, similar to passing `workflow_set(models = list(*), preproc = list(*))` (or the ability to pass a resamples list akin to `workflow_map(resamples = list(*))`)

Here are some example questions: would `spatial_clustering_cv(coords = c(x, y))` or `group_vfold_cv(cities)` be better for model tuning? How much does granularity matter in model tuning, e.g. `group_vfold_cv(cities)` vs. `group_vfold_cv(municipalities)`? How much is the model affected by resampling efficiency (e.g. vs. `bootstraps()`)?

An example decision point: if there is a small difference in accuracy but a large difference in run time (or electricity cost) between resampling by cities vs. by provinces, then provinces would be selected. The results could be shown in the output of `rank_results()`, with a column indicating which resampling method was used.

Then, for utility, it would be nice to save the model with a particular resampling method so that the model specification can be reused right away (would this be `tidypredict_fit()`? `augment()`? `bake()`?).
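To make scenario 3 concrete, here is a hypothetical sketch of what the proposed interface could look like. Note that the `resamples` argument to `workflow_set()` and a resampling column in `rank_results()` do not exist in workflowsets today — that is exactly the feature being requested — and `train_data`, `my_recipe`, and the `cities` grouping column are placeholders:

```r
# Hypothetical sketch of scenario 3 -- workflow_set(resamples = ) is the
# PROPOSED interface, not a real workflowsets argument.
library(tidymodels)
library(spatialsample)

# Candidate resampling schemes to benchmark against each other
# (train_data and the cities column are placeholder names).
resampling_schemes <- list(
  cities    = group_vfold_cv(train_data, group = cities),
  clusters  = spatial_clustering_cv(train_data, v = 10),
  bootstrap = bootstraps(train_data)
)

wf_set <- workflow_set(
  preproc   = list(base = my_recipe),
  models    = list(lm = linear_reg(),
                   rf = rand_forest(mode = "regression")),
  resamples = resampling_schemes   # <- proposed new argument
)

results <- workflow_map(wf_set, "tune_grid", grid = 10)

# Proposed: rank_results() would gain a column identifying which
# resampling scheme produced each row.
rank_results(results, rank_metric = "rmse")
```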
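Until something like this exists, the benchmarking in scenario 2 can be roughly approximated with the current API by mapping one workflow set over each resample object and binding the ranked results with a label column. This is only a sketch under assumptions: `train_data`, `my_recipe`, and the grouping columns are placeholders, and `system.time()` stands in for a proper cost measurement:

```r
# Present-day workaround sketch for scenario 2: evaluate the same workflow
# set against each resampling scheme, then compare ranked results.
library(tidymodels)
library(purrr)

resampling_schemes <- list(
  cities         = group_vfold_cv(train_data, group = cities),
  municipalities = group_vfold_cv(train_data, group = municipalities),
  bootstrap      = bootstraps(train_data)
)

wf_set <- workflow_set(
  preproc = list(base = my_recipe),
  models  = list(lm = linear_reg())
)

ranked <- imap_dfr(resampling_schemes, function(rs, scheme_name) {
  elapsed <- system.time(
    fitted <- workflow_map(wf_set, "fit_resamples", resamples = rs)
  )[["elapsed"]]
  rank_results(fitted, rank_metric = "rmse") |>
    mutate(resampling = scheme_name, run_time_s = elapsed)
})

# One table of metric estimates plus which resampling scheme produced them,
# so a small accuracy loss can be weighed against a large run-time saving.
ranked
```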