unit8co / darts

A python library for user-friendly forecasting and anomaly detection on time series.
https://unit8co.github.io/darts/
Apache License 2.0

Include validation series with hyperparameter optimization in Darts #2301

Open ETTAN93 opened 3 months ago

ETTAN93 commented 3 months ago

When tuning hyperparameters on non-time-series data, one would normally split the dataset into a training set, a validation set, and a test set. The validation set is then used to determine which set of hyperparameters performs best.
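
(For reference, with a Darts TimeSeries such a chronological split could look roughly like this; the dates are placeholders and series stands for the full target series:)

import pandas as pd

# series: the full target TimeSeries (loaded elsewhere)
# everything before 2022-01-01 -> train, 2022 -> validation, 2023 onwards -> test
train, rest = series.split_before(pd.Timestamp('2022-01-01'))
val, test = rest.split_before(pd.Timestamp('2023-01-01'))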

How does this work for a historical backtest in time-series forecasting? I referred to the two examples in Darts: example1 and example2.

For example, when just doing a normal historical backtest, assume I have hourly data from 2020-01-01 to 2023-12-31. I would simply specify when the test set starts, e.g. 2023-01-01, and carry out the historical backtest that way, e.g.

model_estimator.historical_forecasts(
    series=target_hf,
    past_covariates=None,
    future_covariates=future_cov_hf,
    start=pd.Timestamp('2023-01-01'),
    retrain=30,
    forecast_horizon=24,
    stride=24,
    train_length=2160,
    verbose=True,
    last_points_only=False,
)

This means that the model is retrained every 30 forecast iterations, i.e. every 30 days given the 24-hour stride, each time on the past 2160 hours (90 days) of data. It predicts the next 24 hours every 24 hours.
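
(As a quick sanity check on those numbers with hourly data:)

# hourly data: one time step = 1 hour
stride = 24            # a new forecast every 24 steps -> once per day
retrain = 30           # retrain every 30 forecast iterations -> every 30 days
train_length = 2160    # 2160 hours / 24 = 90 days of training data per fit
forecast_horizon = 24  # each forecast covers the next 24 hours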

If I now want to do hyperparameter optimization with Optuna and Darts, would this make sense:

def objective(trial):
    forecast_horizon = 24

    # sample component-specific lag ranges for the future covariates
    fc_lags_dict = {}
    for feature in future_cov:
        future_cov_lags_lower_bound = trial.suggest_int(f'fc_lb_{feature}', -96, -1)
        future_cov_lags_upper_bound = trial.suggest_int(f'fc_up_{feature}', 1, 72)
        fc_lags_dict[feature] = list(range(future_cov_lags_lower_bound, future_cov_lags_upper_bound))

    # sample how far back the target lags reach
    target_lags_lower_bound = trial.suggest_int('target_lags_lower_bound', -96, -1)

    model = LinearRegressionModel(
        lags=list(range(target_lags_lower_bound, 0)),
        lags_past_covariates=None,
        lags_future_covariates=fc_lags_dict,
        output_chunk_length=forecast_horizon,
        multi_models=True,
    )

    hf_results = model.historical_forecasts(
        series=target_hf,
        past_covariates=None,
        future_covariates=future_cov_hf,
        start=pd.Timestamp('2023-01-01'),
        retrain=30,
        forecast_horizon=24,
        stride=24,
        train_length=2160,
        verbose=True,
        last_points_only=False,
    )

    mae = return_metrics(hf_results)
    return mae
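
The study itself would then be run with standard Optuna usage, roughly along these lines (the number of trials is arbitrary):

import optuna

# minimize the MAE returned by the objective
study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=50)

print(study.best_params)
print(study.best_value)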

But this then uses the full set of data to do the hyperparameter optimization. Do I need to split the data out separately for the test set?

dennisbader commented 3 months ago

Hi @ETTAN93, yes, for this you can simply define a val_end as a pd.Timestamp and then pass series=target_hf[:val_end] when calling historical_forecasts.

For the final test set, adjust the start date to be after val_end and use the entire target_hf.

ETTAN93 commented 2 months ago

Hi @dennisbader, just to clarify what you mean:

Assuming I have a dataset that goes from 2020-01-01 to 2023-12-31, are you saying to split the dataset into, for example:

train_start = pd.Timestamp('2020-01-01')
val_start = pd.Timestamp('2022-01-01')
val_end = pd.Timestamp('2022-12-31')
test_start = pd.Timestamp('2023-01-01')

Then within the objective function for hyperparameter optimization, you would set:

hf_results = model.historical_forecasts(
    series=target_hf[:val_end],
    past_covariates=None,
    future_covariates=future_cov_hf,
    start=val_start,
    retrain=30,
    forecast_horizon=24,
    stride=24,
    train_length=2160,
    verbose=True,
    last_points_only=False,
)

After getting the best hyperparameters, you would then evaluate on the test set with:

hf_results = model.historical_forecasts(
    series=target_hf,
    past_covariates=None,
    future_covariates=future_cov_hf,
    start=test_start,
    retrain=30,
    forecast_horizon=24,
    stride=24,
    train_length=2160,
    verbose=True,
    last_points_only=False,
)

Is that correct?

dennisbader commented 2 months ago

Hi @ETTAN93, yes, that's exactly it 👍 (assuming that your frequency is "D"/daily)
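
(For hourly data as in the original post, the same split would just use hourly-resolved timestamps as the boundaries, for example:)

import pandas as pd

# illustrative hourly-resolution split points
val_start = pd.Timestamp('2022-01-01 00:00')
val_end = pd.Timestamp('2022-12-31 23:00')   # last hour of the validation year
test_start = pd.Timestamp('2023-01-01 00:00')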

noori11 commented 1 month ago

Hi @dennisbader,

This seems to mean that the model's hyperparameters are tuned on the interval from 2022-01-01 until 2022-12-31 and then used for all forecasts made from 2023-01-01 onward.

However, what if you wanted to do hyperparameter optimization every month, with an expanding- or sliding-window cross-validation instead? How would you structure that using Darts?