solcmichDS opened 1 year ago
Good news, this is already supported since darts==0.23.0!
As a reference, see the documentation here
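A minimal sketch of how that multi-series support is used (assuming a recent darts version; the dataset, model, and parameters here are purely illustrative): backtest accepts a sequence of series and returns one score per series.

```python
from darts.datasets import AirPassengersDataset
from darts.models import LinearRegressionModel

series = AirPassengersDataset().load()
# two toy evaluation series, just for illustration
series_list = [series[:100], series[:120]]

model = LinearRegressionModel(lags=12)
# with a sequence of series, backtest returns one score per series
scores = model.backtest(series_list, start=0.8, forecast_horizon=12)
print(scores)
```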
I might have overlooked something, but I still don't see the functionality. It's true that the backtest method supports multiple time series, but the re-training is still always done on a single one. This can be seen in the code here (on line 729): there is an outer iterator that loops over the provided series and constructs the training dataset from each single series, which is not the desired behavior for global models. Also consider this code:
```python
from darts.datasets import ETTh1Dataset
from darts.models import TFTModel

# length of the resampled series is 726
ts = ETTh1Dataset().load().resample('1D')
ts1 = ETTh1Dataset().load().resample('1D')  # a second copy of the same series
multiple_ts = [ts, ts1]

model = TFTModel(input_chunk_length=100, output_chunk_length=28,
                 add_relative_index=True, n_epochs=10)
model.historical_forecasts(multiple_ts, stride=200, last_points_only=False)
```
As the length of the resampled series is 726 and the stride is 200, I would expect the training to be done only 3 times (once per historical point); instead, the training is done 3 times for each time series, so 6 times in total. And according to the code, the model is trained on a single time series each time.
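For reference, a hand-rolled sketch of the behavior I would expect (this is not existing darts API; it assumes all series share the same length and frequency, and hard-codes the parameters from the example above):

```python
# Expected behavior, sketched by hand: ONE training run per historical point,
# on the truncated versions of *all* series, then a forecast for each series.
horizon, stride = 28, 200
length = len(multiple_ts[0])  # assumes equal-length series (726 here)
forecasts = [[] for _ in multiple_ts]

for cutoff in range(128, length - horizon, stride):  # 128 = 100 + 28, minimum train length
    train_set = [series[:cutoff] for series in multiple_ts]  # trim ALL series
    model = TFTModel(input_chunk_length=100, output_chunk_length=28,
                     add_relative_index=True, n_epochs=10)
    model.fit(train_set)  # one global training on the whole (trimmed) dataset
    for i in range(len(multiple_ts)):
        forecasts[i].append(model.predict(n=horizon, series=train_set[i]))
```

With length 726, this retrains at cutoffs 128, 328, and 528 — 3 trainings in total, each on all series at once.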
It is also somewhat connected to this issue.
We discussed this and believe it can be a valuable addition to historical forecasts in the long term. There are many scenarios here that we need to take into account.
I will add this to our backlog, but first we are prioritizing making historical forecasts more efficient for our RegressionModels and TorchForecastingModels.
I'm not 100% sure, but I feel it might make sense at some point to differentiate training series from evaluation series in historical_forecasts(): the evaluation time steps would be determined by each of the evaluation series, and for each such evaluation time step we would produce a corresponding training set by trimming all the training series, retaining only their values prior to that time step. This decoupling of evaluation and training series would make it easier to specify which series to train and evaluate on (as these would typically not be the same series in general). A rough sketch of this idea is below.
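To illustrate, a hypothetical sketch only — none of these signatures exist in darts, and the helpers are made up for the example:

```python
# Hypothetical sketch of decoupled training/evaluation series (NOT darts API).
# Evaluation time steps come from the eval series; at each step, the model is
# retrained from scratch on all training series trimmed to values before it.
def historical_forecasts_decoupled(make_model, train_series, eval_series,
                                   horizon, eval_steps):
    forecasts = [[] for _ in eval_series]
    for step in eval_steps:  # timestamps derived from the evaluation series
        # keep only values strictly before the evaluation time step
        trimmed_train = [s.drop_after(step) for s in train_series]
        model = make_model()  # fresh model, so each retrain starts from scratch
        model.fit(trimmed_train)
        for i, s in enumerate(eval_series):
            forecasts[i].append(model.predict(n=horizon, series=s.drop_after(step)))
    return forecasts
```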
Thanks for the answer. As this is a fairly critical feature for our ongoing project, I will try to write a simple version myself (e.g. one where the user provides time series of the same length) and then share it here as a reference / starting point. Does that make sense? Thanks.
I'm trying to run historical forecasts using a TorchForecastingModel, training each iteration on a list of time series (like when passing a list of time series to model.fit()); however, historical_forecasts loops over each series in the list and runs a separate historical forecast for each series.
I am not sure if that is the same issue as this one, otherwise I will create a new one.
@connesy, it's the same issue.
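In the meantime, a possible stopgap (a sketch only, with an important caveat): fit the model once on the full list of series, then run historical_forecasts with retrain=False. This at least evaluates a globally trained model, but it gives up retraining entirely and leaks the evaluation period into training.

```python
# Stopgap sketch: fit once on all series, then evaluate without retraining.
# Caveat: the evaluation period is included in training (leakage), and no
# retraining happens at each step, so this is not a substitute for the
# multi-series retraining discussed in this issue.
model = TFTModel(input_chunk_length=100, output_chunk_length=28,
                 add_relative_index=True, n_epochs=10)
model.fit(multiple_ts)
forecasts = model.historical_forecasts(
    multiple_ts, retrain=False, stride=200, last_points_only=False
)
```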
Is your feature request related to a current problem? Please describe. I need to be able to robustly backtest any ForecastingModel so I can have a valid view of how the model performs over time. As the biggest added value of global models is that they can be trained on multiple series at once, not having that functionality available during backtesting is a big flaw.
Describe proposed solution Add support for backtesting global models on multiple time series (not just consecutively, but actually executing the training during backtesting on the whole dataset).
Describe potential alternatives I don't see any.