Mcompetitions / M5-methods

Data, Benchmarks, and methods submitted to the M5 forecasting competition

computation and time needed for benchmarks (and winners' submissions) #9

Open · rquintino opened this issue 4 years ago

rquintino commented 4 years ago

Hi there, first of all, many thanks for all the effort and insights resulting from this competition (I'm now deep diving into the findings paper). Amazing work and contribution!

One thing I was looking for but couldn't find so far: would it be possible to know, or at least get an idea of, the compute and time needed by the benchmarks and the winning submissions? In practice, it's a relevant dimension for evaluating different approaches.

Example: if I understood correctly, for the exponential smoothing bottom-up benchmark the model was fit ~30k times (the number of time series at the most disaggregated level). From the code it looks like this is done in parallel, but it probably still takes a fair amount of time.

It would be great to get any info on this.

Thanks! (Referring to https://github.com/Mcompetitions/M5-methods/blob/60829cf13c8688b164a7a2fc8c4832cc216bdbec/validation/Point%20Forecasts%20-%20Benchmarks.R)
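For context, here is a rough sketch of the kind of timing measurement I have in mind for the bottom-up exponential smoothing benchmark. This is not the repo's code, just an illustration: it assumes the ~30k bottom-level series are rows of a matrix `sales_bottom` (a name I made up) and uses `ses()` from the forecast package, which may differ from what the benchmark script actually calls.

```r
# Illustrative only: time how long ~30k simple exponential smoothing fits take
# when run in parallel over the bottom-level series.
library(forecast)
library(parallel)

fit_one <- function(x) {
  fc <- ses(x, h = 28)      # simple exponential smoothing, 28-day horizon as in M5
  as.numeric(fc$mean)
}

start <- Sys.time()
fcsts <- mclapply(seq_len(nrow(sales_bottom)),
                  function(i) fit_one(sales_bottom[i, ]),
                  mc.cores = max(1, detectCores() - 1))  # mclapply forks; use 1 core on Windows
print(Sys.time() - start)   # rough wall-clock time for all fits
```

Even at a fraction of a second per fit, 30k fits add up, hence the question.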

rquintino commented 4 years ago

Just adding for anyone interested in the thread: this related paper has much more information on the compute needed for similar (statistical/ML) approaches. It is not the same dataset, but it is a very interesting read.

"Comparison of statistical and machine learning methods for daily SKU demand forecasting" https://www.researchgate.net/publication/344374729_Comparison_of_statistical_and_machine_learning_methods_for_daily_SKU_demand_forecasting