thuml / Autoformer

About Code release for "Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting" (NeurIPS 2021), https://arxiv.org/abs/2106.13008
MIT License

Standard way to evaluate multi-step forecasting #164

Closed Med-Rokaimi closed 1 year ago

Med-Rokaimi commented 1 year ago

Thank you again for the amazing work. I would like to ask about the evaluation method:

Suppose I have a test dataset of 65 observations and the prediction length is 20.

So each observation of the test dataset will generate 20 predictions, giving an array of shape (65, 20).

Now, to calculate the MAE between the predictions and the true values, what is the standard way to do this in time-series forecasting? Should I take the first 20 predicted values, compare them with the first 20 true values, and compute the absolute difference between each predicted value and its corresponding true value? Or something else? Can you please give me some hints or code? This is what I have done:

```python
preds = np.array(np.reshape(predictions, (65, 20)))
trues = np.array(np.reshape(values, (65, 20)))
preds = mm.inverse_transform(preds)
trues = mm.inverse_transform(trues)
preds = preds[:, 1]  # keep only the second horizon step (column index 1)
trues = trues[:, 1]
metrics = metric(preds, trues)
```

wuhaixu2016 commented 1 year ago

Yes, I think your code is right.
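For reference, the convention in most long-term forecasting papers (and, as far as I can tell, in this repository's `utils/metrics.py`) is to average the error over every test window and every horizon step at once, rather than over a single step. A minimal sketch of that, using the (65, 20) shapes from the question above (the `trues`/`preds` arrays here are toy stand-ins, not the questioner's data):

```python
import numpy as np

def mae(preds, trues):
    # Mean absolute error averaged over all windows and all horizon steps
    return np.mean(np.abs(preds - trues))

def mse(preds, trues):
    # Mean squared error, same averaging convention
    return np.mean((preds - trues) ** 2)

# Toy example: 65 test windows, each with a 20-step forecast
rng = np.random.default_rng(0)
trues = rng.normal(size=(65, 20))
preds = trues + rng.normal(scale=0.1, size=(65, 20))

print(mae(preds, trues))  # one scalar summarising all 65 * 20 errors
print(mse(preds, trues))
```

If instead you want to see how the error grows with the horizon, average over the window axis only, e.g. `np.abs(preds - trues).mean(axis=0)` gives a length-20 vector, one MAE per forecast step.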