cchallu / nbeatsx

Hyperparameter files #7

Closed mcvageesh closed 2 years ago

mcvageesh commented 2 years ago

Hello Authors,

I'd be grateful if you would also upload the hyperparameter files for each dataset / case. I would like to check your validation MAE as well as the selected hyperparameter values.

Thanks!

kdgutier commented 2 years ago

Hi @mcvageesh,

The updated version of the NBEATSx paper includes a summary of our hyperparameter exploration findings in Appendix A.5. Hope it helps.

mcvageesh commented 2 years ago

Hi @kdgutier ,

Thanks :) I did go through Section A.5 before. The main reason I requested the hyperparameter files was to avoid running the hyperparameter search all over again, and also to compare the validation MAE of the best model found by the search against another model I am trying to use. Lago et al. (https://github.com/jeslago/epftoolbox) provide these files in their toolbox and they were really helpful, so if you are still working on this and have the time, please upload the files - it would be a great help!

kdgutier commented 2 years ago

@mcvageesh

I would advise running hyperparameter selection again on an informed/restricted space based on the Appendix A.5 suggestions. As we mention in Appendix A.3 of the paper, you can achieve even better results with a fraction of the hyperparameter exploration steps.
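For instance, a restricted search could look roughly like the sketch below using hyperopt. The parameter names, ranges, and the train_and_validate helper are purely illustrative placeholders, not the settings reported in Appendix A.5 or the repository's actual search code.

```python
# Illustrative only: a restricted hyperopt space informed by a previous broad search.
# Parameter names, ranges, and `train_and_validate` are placeholders.
from hyperopt import hp, fmin, tpe, Trials

restricted_space = {
    'learning_rate': hp.loguniform('learning_rate', -9, -5),   # roughly 1e-4 to 7e-3
    'batch_size':    hp.choice('batch_size', [256, 512]),
    'n_hidden':      hp.choice('n_hidden', [256, 512]),
    'dropout':       hp.uniform('dropout', 0.0, 0.3),
}

def train_and_validate(params):
    # Placeholder: plug in the actual model training here and return the validation MAE.
    return 1.0

def objective(params):
    # hyperopt minimizes the returned value, so return the validation MAE directly.
    return train_and_validate(params)

trials = Trials()
best = fmin(fn=objective, space=restricted_space, algo=tpe.suggest,
            max_evals=50, trials=trials)
print(best)
```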

The most expensive part of the forecasting pipeline is the rolled-window evaluation, because of the model recalibration. We ran it sequentially, but if you have the computational resources, I would suggest trying to parallelize the test evaluation.
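Something along these lines with Python's standard library could distribute the recalibrations across processes. Here evaluate_window is a placeholder for the actual refit-and-forecast step, and the cutoff range is illustrative; neither comes from this repository.

```python
# Rough sketch of parallelizing the rolled-window test evaluation across processes.
# `evaluate_window` is a placeholder for the actual recalibration + forecast step,
# and the cutoff range below is illustrative.
from concurrent.futures import ProcessPoolExecutor

def evaluate_window(cutoff_day):
    # Refit the model on data up to `cutoff_day` and return the forecast for the next day.
    return cutoff_day  # placeholder return value

cutoff_days = range(1096, 1096 + 364)  # one recalibration per test day (illustrative)

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=8) as pool:
        forecasts = list(pool.map(evaluate_window, cutoff_days))
```

If each recalibration trains on a GPU, you would also need to pin workers to separate devices or fall back to CPU training to avoid contention.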

mcvageesh commented 2 years ago

Okay, thank you!

mcvageesh commented 2 years ago

Hi again! I am a bit confused about the data shuffling used to split the train and validation sets. Consider the function train_val_split(len_series, offset, window_sampling_limit, n_val_weeks, ds_per_day) in src/utils/experiment/utils_experiment.py.

Let len_series = 4 * 365 * 24, offset = 0, window_sampling_limit = 4 * 365 * 24, n_val_weeks = 42, and ds_per_day = 24.

Then, on running the function, we obtain len(train_days) = 1096 and len(validation_days) = 364. Suppose, for example, that day 70 is picked as a validation day, so days 70, 71, 72, 73, 74, 75, 76 are selected as validation days. Suppose day 77 is not picked for the validation set and hence remains in the training set. Wouldn't that cause data contamination, in the sense that the lagged values (days 70-76) used to predict day 77 during training are part of the validation set?

Is this problem avoided in some way in the code? Or would you say that this is a necessary evil that cannot be avoided without incurring significant computational time (rolling validation) or a significant reduction in training samples (blocked validation)?
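To make the concern concrete, here is a toy stand-in for the split. This is not the repository's train_val_split logic, just a simplified sketch with illustrative numbers: it samples week-long validation blocks and then counts training days whose 7-day lag window falls inside the validation set.

```python
# Toy illustration of the leakage concern; NOT the repository's train_val_split,
# just a simplified stand-in with illustrative numbers.
import numpy as np

rng = np.random.default_rng(0)
n_days = 4 * 365          # four years of daily data
n_val_weeks = 52          # number of week-long validation blocks (illustrative)

week_starts = rng.choice(np.arange(n_days - 7), size=n_val_weeks, replace=False)
validation_days = {d for start in week_starts for d in range(start, start + 7)}
train_days = [d for d in range(n_days) if d not in validation_days]

# A training day is "contaminated" if any of its 7 lagged days is a validation day.
contaminated = [d for d in train_days
                if any((d - lag) in validation_days for lag in range(1, 8))]
print(f'{len(contaminated)} of {len(train_days)} training days '
      f'have lagged inputs inside the validation set')
```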

Thank you!