LarsBentsen / FFTransformer

Multi-Step Spatio-Temporal Forecasting: https://authors.elsevier.com/sd/article/S0306-2619(22)01822-0

Question about look-back window showed in "Experimental set-up" #11

Closed sunxiaoyao-git closed 10 months ago

sunxiaoyao-git commented 10 months ago

Hi, I have a new question from reading your paper. In Section 4.4, "Experimental set-up", you wrote: "From the tuning, it was found that a look-back window of 32 time-steps was suitable for the 10-min and 1-h ahead forecasts, increased to 64 for the 4-h forecasts, which seemed reasonable, as longer contexts might be useful to better predict trends further into the future." My question is whether every model (including the MLP, LSTM, and Transformer) used the same seq_len?
How to choose the history length for forecasting may be critical in experimentation.

LarsBentsen commented 10 months ago

Yes, all models used the same look-back windows (seq_len) for the different forecast horizons. The sequence length is application-dependent, and I suggest testing a few different lengths (e.g. treating it as a hyperparameter) to see what works best for your particular application. Additionally, looking at things such as autocorrelation plots could help guide the process. I hope this helps! Please feel free to reopen if this did not answer your questions. :)
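As a rough illustration of the autocorrelation idea mentioned above (this is a hypothetical helper, not code from the FFTransformer repo): one simple heuristic is to keep extending the look-back window while the series remains noticeably autocorrelated, and stop once the autocorrelation drops below some threshold.

```python
import numpy as np

def choose_lookback(series, max_lag=128, threshold=0.2):
    """Heuristic: return the longest lag whose autocorrelation stays above
    `threshold`. A rough starting point for seq_len, to be refined by
    treating it as a hyperparameter (as suggested in the reply above)."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)  # lag-0 autocovariance (unnormalised)
    lookback = 1
    for lag in range(1, max_lag + 1):
        acf = np.dot(x[:-lag], x[lag:]) / denom  # autocorrelation at this lag
        if abs(acf) < threshold:
            break
        lookback = lag
    return lookback

# Toy example: a slowly varying signal (period 144, i.e. 24 h of 10-min
# samples) with a little noise stays correlated over many lags.
rng = np.random.default_rng(0)
t = np.arange(2000)
wind = np.sin(2 * np.pi * t / 144) + 0.1 * rng.standard_normal(2000)
print(choose_lookback(wind))
```

The threshold and maximum lag are arbitrary choices here; a candidate produced this way would still need to be validated against held-out forecasting error, as done via tuning in the paper.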