Hi,
The ML benchmarks of M4 consider a multiplicative seasonality, not an additive one.
Looking at your results, it seems the forecasts are not being re-seasonalized as they should be. Are you sure you re-seasonalize your forecasts by multiplying them by the corresponding seasonal indices?
Apart from that, I cannot really tell what the problem could be. However, if run properly, the differences between the published and the replicated results should be close to zero.
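For reference, a minimal sketch of the re-seasonalisation step described above, assuming `si` holds the per-period multiplicative seasonal indices (normalised to average 1) estimated from the training series; the function and argument names are illustrative, not taken from the benchmark code:

```python
import numpy as np

def reseasonalize(y_pred, si, start):
    """Multiply each h-step-ahead forecast by the seasonal index
    of its position in the seasonal cycle (multiplicative scheme)."""
    pos = (start + np.arange(len(y_pred))) % len(si)
    return y_pred * si[pos]

# e.g. for forecasts that begin right after a training series of length n:
# y_pred = reseasonalize(y_pred_deseasonalized, si, start=n)
```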
Hi, thanks for the reply, found the bug in my code.
Hi,
I'm having issues replicating the results for the MLP method, especially for the hourly dataset.
I'm using the hyper-parameter settings found in https://github.com/Mcompetitions/M4-methods/blob/master/ML_benchmarks.py.
The values below are the percentage differences between the published sMAPE scores and my replicated ones, i.e. a value of 100 means a 100% difference, and positive values indicate that the replicated results are worse than the published ones.
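For clarity, the comparison I ran is roughly the following (a sketch; the helper names are mine, while the sMAPE definition follows the M4 competition's):

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric MAPE as defined in the M4 competition, in percent."""
    return 200.0 * np.mean(np.abs(y_true - y_pred)
                           / (np.abs(y_true) + np.abs(y_pred)))

def pct_diff(replicated, published):
    """Percentage difference of replicated vs published sMAPE;
    positive means the replication is worse."""
    return 100.0 * (replicated - published) / published
```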
The plot below shows part of the H1 training series, the full test series, and the MLP point forecasts, where `y_pred_orig` are the point forecasts found in the `submission-MLP.rar` file, `y_pred_add` are the point forecasts I obtain with additive deseasonalisation, and `y_pred_mul` are the point forecasts I obtain with multiplicative deseasonalisation.

I find similar patterns for RNN. Are you using additive seasonality by any chance? Any other idea where the deviation may come from?