Closed. antoinecarme closed this issue 1 year ago.
https://otexts.com/fpp3/accuracy.html
https://www.sciencedirect.com/science/article/abs/pii/S0169207006000239?via%3Dihub
"Another look at measures of forecast accuracy". Rob J. Hyndman, Anne B. Koehler. International Journal of Forecasting, Elsevier, October–December 2006.
Also add the same scaling for RMSE: the RMSSE, as a new performance measure.
Origin: a user model on Google Colab (the #PyAF hashtag rocks ;).
https://colab.research.google.com/drive/1zaVQuobR8M63qB-UDDX8ZX37ctl98YIT?usp=sharing
Original Model (MAPE)
Same Model with MASE
Same Model with RMSE (L2)
Same Model with RMSSE (scaled RMSE)
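For reference, the two scaled measures can be sketched as follows. This is a minimal illustration following the Hyndman & Koehler (2006) definitions, not PyAF's internal implementation; the function names and the naive one-step scaling are assumptions for the sketch:

```python
import numpy as np

def mase(y_true, y_pred, y_train):
    # MASE: MAE of the forecast, scaled by the in-sample MAE of the
    # one-step naive forecast (Hyndman & Koehler, 2006).
    scale = np.mean(np.abs(np.diff(y_train)))
    return np.mean(np.abs(y_true - y_pred)) / scale

def rmsse(y_true, y_pred, y_train):
    # RMSSE: RMSE scaled the same way, with squared naive errors
    # in the denominator.
    scale = np.mean(np.diff(y_train) ** 2)
    return np.sqrt(np.mean((y_true - y_pred) ** 2) / scale)
```

A value of 1.0 means the model does no better than the in-sample naive forecast; values below 1.0 indicate an improvement.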
The prediction interval plots now give the values of MAPE and MASE for horizon 1 and horizon H by default.
CLOSING. Added to 5.0
When the signal contains zeros, the MAPE values are not defined. MAPE is simply not suitable for such signals, and this can lead to low-quality models.
https://stackoverflow.com/questions/41571215/forecasr-accuracy-mape-and-zero-values
https://otexts.com/fpp3/accuracy.html
There is no technical work-around that makes MAPE usable in this case.
MAPE is not suitable for reporting performance when the signal contains zeros, but it is very user-friendly and easy to understand.
The only solution is to use a scaled measure like MASE for model selection by default.
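A small illustration of why zeros break MAPE while a scaled denominator stays defined. The data here is hypothetical, not taken from the linked notebook:

```python
import numpy as np

y_true = np.array([0.0, 2.0, 0.0, 4.0])  # hypothetical signal containing zeros
y_pred = np.ones_like(y_true)

# MAPE divides by the actual values, so any zero in the signal blows it up.
with np.errstate(divide='ignore', invalid='ignore'):
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
print(mape)  # inf: unusable for model selection

# The MASE denominator uses first differences of the signal instead,
# which remain well defined even when the signal contains zeros.
naive_scale = np.mean(np.abs(np.diff(y_true)))
mase = np.mean(np.abs(y_true - y_pred)) / naive_scale
print(mase)  # finite
```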
Some benchmarking is needed (#222 ).
It would be nice to have this in PyAF 5.0 (expected on 2023-07-14). So far, so good. DONE.