yuqinie98 / PatchTST

An official implementation of PatchTST: "A Time Series is Worth 64 Words: Long-term Forecasting with Transformers." (ICLR 2023) https://arxiv.org/abs/2211.14730
Apache License 2.0

How to explain the performance discrepancy of the same model across different papers? #82

Open howard-hou opened 8 months ago

howard-hou commented 8 months ago

I've noticed that the MAE and MSE reported for the same model vary significantly across papers. For example, for the Informer model, the MSE for a prediction length of 336 on the ETTh1 dataset is reported as 0.222 in some papers, but in the PatchTST paper the Informer's MSE for the same setting is 1.038. What could be the reason for this discrepancy?

zzkkzz commented 7 months ago

This may depend on whether you use data normalization.
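To illustrate why this matters: most recent long-term forecasting papers compute MSE/MAE on standardized (z-scored) data, so the error magnitude depends heavily on whether the metric is taken on the raw or the normalized scale. Below is a minimal, hypothetical sketch (not code from this repository) showing how the same forecast errors yield very different MSE values on the two scales; all numbers and the synthetic data are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ground truth" and "forecast" on the raw scale
# (e.g., a sensor reading with mean ~30 and std ~8).
y_true_raw = rng.normal(loc=30.0, scale=8.0, size=1000)
y_pred_raw = y_true_raw + rng.normal(loc=0.0, scale=4.0, size=1000)  # forecast errors with std ~4

def mse(a, b):
    return float(np.mean((a - b) ** 2))

print("MSE on raw scale:", mse(y_true_raw, y_pred_raw))  # about 16 for this setup

# Standardize with the (training) mean and std, as is common practice
# in the Informer/Autoformer/PatchTST line of benchmarks.
mean, std = y_true_raw.mean(), y_true_raw.std()
y_true_norm = (y_true_raw - mean) / std
y_pred_norm = (y_pred_raw - mean) / std

print("MSE on standardized scale:", mse(y_true_norm, y_pred_norm))  # about 0.25 for this setup
```

So before comparing numbers across papers, it is worth checking whether both report metrics on normalized data and whether they use the same normalization statistics and data splits.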