Mcompetitions / M4-methods

Data, Benchmarks, and methods submitted to the M4 forecasting competition

MLP with additive seasonality? #26

Closed: mloning closed this issue 4 years ago

mloning commented 4 years ago

Hi,

I'm having issues replicating the results for the MLP method, especially for the hourly dataset.

I'm using the hyper-parameter settings found in https://github.com/Mcompetitions/M4-methods/blob/master/ML_benchmarks.py.
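
For context, my replication follows roughly the pipeline sketched below. The lag count and network size are placeholders (the actual settings are the ones in ML_benchmarks.py), and the de-/re-seasonalisation step is omitted here:

```python
# Sketch of the replication pipeline; hyper-parameters are placeholders,
# the real values come from ML_benchmarks.py, and the seasonal
# adjustment step is omitted for brevity.
import numpy as np
from sklearn.neural_network import MLPRegressor


def make_windows(y, n_lags):
    """Slice a series into (lagged inputs, next value) training pairs."""
    X = np.array([y[i:i + n_lags] for i in range(len(y) - n_lags)])
    return X, y[n_lags:]


def mlp_forecast(y_train, horizon, n_lags=3):
    X, targets = make_windows(y_train, n_lags)
    model = MLPRegressor(hidden_layer_sizes=(6,)).fit(X, targets)
    window = list(y_train[-n_lags:])
    preds = []
    for _ in range(horizon):
        # Recursive multi-step forecasting: feed each prediction back in.
        preds.append(float(model.predict([window])[0]))
        window = window[1:] + [preds[-1]]
    return np.array(preds)
```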

| Method | Yearly | Quarterly | Monthly | Weekly | Daily | Hourly |
| --- | --- | --- | --- | --- | --- | --- |
| MLP | -7.910408 | -7.948233 | -4.790635 | -44.658715 | -51.489338 | 147.500501 |

Values are the percentage difference between the published sMAPE and the replicated one, i.e. a value of 100 means a 100% difference; positive values indicate that the replicated results are worse than the published ones.
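
In code, each cell is just (variable names are mine):

```python
# Percentage difference between replicated and published sMAPE;
# positive values mean the replication is worse than the published result.
pct_diff = 100 * (smape_replicated - smape_published) / smape_published
```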

[Plot: end of the H1 training series, the full test series, and the three sets of MLP point forecasts described below]

The plot shows part of the H1 training series, the full test series, and the MLP point forecasts: y_pred_orig are the point forecasts from the submission-MLP.rar file, y_pred_add are the forecasts I obtain with additive deseasonalisation, and y_pred_mul are those I obtain with multiplicative deseasonalisation.

I find similar patterns for the RNN. Are you using additive seasonality by any chance? Any other ideas about where the deviation might come from?

vangspiliot commented 4 years ago

Hi,

The ML benchmarks of M4 use a multiplicative seasonality, not an additive one.

Looking at your results, it seems that the produced forecasts are not re-seasonalized as they should be. Are you sure you re-seasonalize your forecasts by multiplying them by the corresponding seasonal indices?

Apart from that, I cannot really tell what the problem could be. However, if the code is run properly, the differences between the published and the replicated results should be close to zero.
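
For reference, the multiplicative cycle looks roughly like the sketch below. The function names are illustrative, not the exact code from ML_benchmarks.py (the real code also has additional logic, e.g. deciding whether to deseasonalize a series at all):

```python
import numpy as np


def seasonal_indices(y, freq):
    """Multiplicative seasonal indices from a classical decomposition:
    centred moving-average trend, then per-position averaged ratios."""
    if freq % 2 == 0:
        # Even period: 2 x freq centred moving average.
        kernel = np.convolve(np.full(freq, 1.0 / freq), [0.5, 0.5])
    else:
        kernel = np.full(freq, 1.0 / freq)
    trend = np.convolve(y, kernel, mode="valid")   # full windows only
    offset = (len(kernel) - 1) // 2                # centre of each window
    ratios = y[offset:offset + len(trend)] / trend
    positions = (offset + np.arange(len(trend))) % freq
    idx = np.array([ratios[positions == p].mean() for p in range(freq)])
    return idx * freq / idx.sum()  # normalise so the indices average to 1


def deseasonalize(y, indices):
    # Divide each observation by the index of its seasonal position;
    # an additive scheme would subtract the indices instead.
    return y / indices[np.arange(len(y)) % len(indices)]


def reseasonalize(y_hat, indices, start):
    # Multiply the forecasts by the indices, continuing the seasonal
    # cycle at phase `start`; an additive scheme would add them back.
    return y_hat * indices[(start + np.arange(len(y_hat))) % len(indices)]
```

Here `start = len(y_train) % freq`, so the forecasts continue the seasonal phase of the training series; for the hourly dataset `freq` is 24.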

mloning commented 4 years ago

Hi, thanks for the reply. I found the bug in my code.