Closed: thomktz closed this issue 1 month ago
Hi @thomktz, for this you can simply run historical forecasts or backtest twice: once for hyperparameter optimization on a validation set, and once for model selection on a test set. The val and test sets can be configured by slicing the target series and adapting the start dates. You can use #2301 as a reference.
Does this answer your question?
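The two-pass procedure described above can be sketched independently of Darts. The sketch below uses a plain Python list and a toy moving-average forecaster (`forecast_moving_average`, `backtest`, and `window` are hypothetical names, not Darts API); the two slices stand in for the validation and test sets obtained by slicing the target series and adapting the start dates:

```python
# Toy illustration of the two-pass procedure: pass 1 picks a
# hyperparameter on a validation slice, pass 2 scores the chosen
# configuration once on a held-out test slice.

def forecast_moving_average(history, window):
    """One-step-ahead forecast: mean of the last `window` points."""
    return sum(history[-window:]) / window

def backtest(series, start, window):
    """Walk forward from index `start`, return the mean absolute error."""
    errors = []
    for t in range(start, len(series)):
        pred = forecast_moving_average(series[:t], window)
        errors.append(abs(series[t] - pred))
    return sum(errors) / len(errors)

series = [float(i % 7) for i in range(100)]  # toy seasonal target
val_start, test_start = 60, 80               # series[:80] is train + val

# Pass 1: hyperparameter optimization on the validation slice only.
best_window = min([2, 3, 7, 14],
                  key=lambda w: backtest(series[:test_start], val_start, w))

# Pass 2: a single, untouched-by-tuning score on the test slice.
test_error = backtest(series, test_start, best_window)
print(best_window, round(test_error, 3))  # → 7 1.75
```

In Darts terms, the two `backtest` calls correspond to running the model's backtest twice with different `start` positions on suitably sliced series, as the reply suggests.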
Feature request: Add support for nested cross-validation to compare metrics between different cross-validation tuned models.
After selecting hyperparameters with cross-validation in Darts, it would be biased to compare models using the cross-validation error of each model's best hyperparameters. To do it properly, hyperparameter selection should be treated as part of the pipeline, inside an outer cross-validation split. More details here.
This could be done through a new `.nested_backtest()` method, or a `nested: bool = False` argument in the `.backtest()` method.

If anyone has experience doing this with Darts series/models and has a straightforward way to achieve it, then the feature might not be needed.
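For reference, the nested scheme the request describes can be sketched in plain Python. This is a minimal expanding-window illustration, not Darts API: `nested_backtest`, `walk_forward_error`, the fold sizes, and the moving-average forecaster are all illustrative names and choices. The inner loop tunes a hyperparameter on the slice just before each outer test fold, so the tuning is inside the evaluation and the outer scores are comparable across model families:

```python
# Nested cross-validation sketch for time series (expanding window).
# Inner loop: hyperparameter selection. Outer loop: unbiased scoring
# of the tuned pipeline, averaged over outer test folds.

def forecast_moving_average(history, window):
    """One-step-ahead forecast: mean of the last `window` points."""
    return sum(history[-window:]) / window

def walk_forward_error(series, start, end, window):
    """Mean absolute error of walk-forward forecasts on series[start:end]."""
    errs = [abs(series[t] - forecast_moving_average(series[:t], window))
            for t in range(start, end)]
    return sum(errs) / len(errs)

def nested_backtest(series, grid, outer_folds=3, fold_len=10, inner_len=10):
    outer_scores = []
    for k in range(outer_folds):
        test_end = len(series) - (outer_folds - 1 - k) * fold_len
        test_start = test_end - fold_len
        inner_start = test_start - inner_len
        # Inner loop: tune on the validation slice preceding this fold.
        best = min(grid, key=lambda w: walk_forward_error(
            series, inner_start, test_start, w))
        # Outer loop: score the tuned model on the untouched test fold.
        outer_scores.append(
            walk_forward_error(series, test_start, test_end, best))
    return sum(outer_scores) / len(outer_scores)

series = [float(i % 7) for i in range(100)]  # toy seasonal target
score = nested_backtest(series, grid=[2, 3, 7, 14])
print(round(score, 3))  # → 1.767
```

A `.nested_backtest()` in Darts would presumably wrap this outer loop around the existing gridsearch/backtest machinery; the returned outer scores (not the inner tuning errors) are what should be compared between model families.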