Nixtla / neuralforecast

Scalable and user friendly neural :brain: forecasting algorithms.
https://nixtlaverse.nixtla.io/neuralforecast
Apache License 2.0

Rolling window in the predict() or cross_validation() method #1062

Open ZhikangLai opened 1 month ago

ZhikangLai commented 1 month ago

Description

As @evandrocardozo described, we are unsure how to use the neuralforecast package for a recursive multi-step forecasting task. For instance, suppose we have a time series dataset `df` of 1000 rows. We take the first 800 rows as the training set and the rest as the testing set. We need to adopt the following prediction strategy: use the window $(X_{t-15}, X_{t-14}, \dots, X_{t})$ to predict $X_{t+1}$; that is, use the previous 15 steps to predict the 16th value. The window then slides forward one step at a time through the entire dataset.
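The indexing behind this sliding-window scheme can be sketched in plain Python. `sliding_windows` below is a hypothetical helper for illustration only, not part of neuralforecast:

```python
def sliding_windows(series, input_size=15):
    """Pair each window of `input_size` past values with the next
    value as the one-step-ahead target, sliding one step at a time."""
    pairs = []
    for t in range(input_size, len(series)):
        window = series[t - input_size:t]  # X_{t-15}, ..., X_{t-1}
        target = series[t]                 # X_t, the value to predict
        pairs.append((window, target))
    return pairs

series = list(range(100))
pairs = sliding_windows(series)
# first pair: the window [0, 1, ..., 14] is used to predict 15
```

In a backtest, each such window would be fed to the trained model to produce exactly one forecast, so every test-set row is predicted once.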

The following code is an example:

```python
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoLSTM
from neuralforecast.losses.pytorch import MQLoss
from ray import tune

LSTM_params = {
    "input_size": tune.choice([15]),
    "encoder_hidden_size": tune.choice([50, 100, 200, 300]),
    "encoder_n_layers": tune.randint(1, 4),
    "context_size": tune.choice([5, 10, 50]),
    "decoder_hidden_size": tune.choice([64, 128, 256, 512]),
    "learning_rate": tune.loguniform(1e-4, 1e-1),
    "max_steps": tune.choice([500]),
    "batch_size": tune.choice([16, 32]),
    "random_seed": tune.choice([42]),
    "enable_progress_bar": False,
    "logger": False,
}

num_samples = len(df)
num_train = int(0.8 * num_samples)
X_train = df[:num_train]
X_test = df[num_train:]

h = 1
loss_fun = MQLoss(level=[95])
models = [AutoLSTM(h=h, loss=loss_fun, config=LSTM_params, num_samples=15)]
nf = NeuralForecast(models=models, freq='1D', local_scaler_type='minmax')
cv_df = nf.cross_validation(df=df, refit=0, n_windows=len(X_test))
```

Here I'm not sure whether setting `n_windows = len(X_test)` is the correct approach, but with this setting I got results that look like recursive multi-step forecasting.

Link

No response

elephaint commented 1 month ago

Hi, not sure I fully understand the question. If you want to perform multistep forecasting, you can set the horizon h to 15?

Let me know if changing the forecast horizon to 15 solves the issue for you.

ZhikangLai commented 3 weeks ago

> Hi, not sure I fully understand the question. If you want to perform multistep forecasting, you can set the horizon h to 15?
>
> Let me know if changing the forecast horizon to 15 solves the issue for you.

Hi @elephaint, I mean something like what this website shows: https://cienciadedatos.net/documentos/py54-forecasting-with-deep-learning. It is called the backtesting method.

marcopeix commented 2 weeks ago

You can do that using the `cross_validation` method. You can either set `n_windows`, which reserves the last `n_windows * h` timesteps for testing, or you can set `val_size` and `test_size` and then set `n_windows=None`.
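The arithmetic behind `n_windows` can be sketched without neuralforecast. `rolling_cutoffs` below is a hypothetical helper (not a library function) that lists the training cutoff of each backtest window, assuming non-overlapping windows (a step size equal to `h`), which is where the `n_windows * h` reserved timesteps come from:

```python
def rolling_cutoffs(n_rows, h, n_windows, step_size=None):
    """Cutoff points for a rolling-origin backtest.

    Each cutoff is the number of training rows for one window; the
    next h rows after it form that window's test span. With
    step_size == h the windows are non-overlapping and together
    cover the final n_windows * h rows of the dataset."""
    if step_size is None:
        step_size = h  # non-overlapping windows
    last_cutoff = n_rows - h                          # final window's training end
    first_cutoff = last_cutoff - (n_windows - 1) * step_size
    return list(range(first_cutoff, last_cutoff + 1, step_size))

# 1000 rows, h=1, 200 one-step windows: cutoffs run from 800 to 999,
# so each of the last 200 rows is forecast exactly once.
cutoffs = rolling_cutoffs(1000, h=1, n_windows=200)
```

With `h=1` and `n_windows` equal to the test-set length, this reproduces the one-step rolling backtest asked about above, matching the `n_windows=len(X_test)` call in the original post.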

There is a complete example of that in the documentation here.