Develop the minimum viable product (MVP) of a benchmarking tool
Description
The benchmarking MVP should test two models on two datasets. The results should be presented in a way that supports choosing the best model (table, report in .md, plots, etc.) given appropriate metrics.
Developing this MVP might take 2-3 iterations to validate the benchmarking 'base features'. It will be the foundation for the other dimensions that the benchmarking module will be able to test, e.g.:
- model features (univariate, multivariate, confidence intervals, computation required for training, etc.)
- performance for different prediction lengths
- performance for different prediction start times
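To make the intended output concrete, here is a minimal sketch of the report side of the MVP, assuming results are collected as (model, dataset, metric) records; all names and scores below are illustrative placeholders, and pandas is used to render the Markdown table:

```python
import pandas as pd

# Illustrative placeholder scores; in the real MVP these would come from
# the benchmark run itself.
results = [
    {"model": "model_a", "dataset": "dataset_1", "MAPE": 8.2},
    {"model": "model_a", "dataset": "dataset_2", "MAPE": 12.5},
    {"model": "model_b", "dataset": "dataset_1", "MAPE": 6.1},
    {"model": "model_b", "dataset": "dataset_2", "MAPE": 10.4},
]

# Pivot to one row per dataset and one column per model, so the best
# model can be read off directly.
table = pd.DataFrame(results).pivot(index="dataset", columns="model", values="MAPE")

# Write the comparison as a Markdown report, one of the output formats
# mentioned above.
with open("benchmark_report.md", "w") as f:
    f.write("# Benchmark report (MAPE, lower is better)\n\n")
    f.write(table.to_markdown())
```

The same pivoted frame could feed a plot instead of (or in addition to) the table; the metric column is the only part that would change per dimension tested.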
Tasks
- Create a `benchmarking` module in https://github.com/fredmontet/ontime/tree/develop/src/ontime/module
- Use the `data` module from issue #33
- Implement a `Benchmark` class with methods such as `add_model(my_model)`, `add_dataset()`, `run()`, etc.
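Since only the method names are specified above, here is one possible shape for the class; this is a minimal sketch, assuming darts-style models with `fit(series)` / `predict(n)`, and the `add_dataset()` signature and train/test handling are assumptions, since the issue does not fix them:

```python
import pandas as pd
from darts.metrics import mape


class Benchmark:
    """Sketch of the Benchmark class; hypothetical, not ontime's actual API."""

    def __init__(self):
        self.models = []
        self.datasets = []

    def add_model(self, my_model):
        # Models are assumed to expose darts-style fit(series) / predict(n).
        self.models.append(my_model)

    def add_dataset(self, name, train, test):
        # Signature is an assumption: the issue only names add_dataset().
        self.datasets.append((name, train, test))

    def run(self):
        # Cross every model with every dataset and collect one score per pair.
        rows = []
        for model in self.models:
            for name, train, test in self.datasets:
                model.fit(train)
                forecast = model.predict(len(test))
                rows.append({
                    "model": type(model).__name__,
                    "dataset": name,
                    "MAPE": mape(test, forecast),
                })
        return pd.DataFrame(rows)
```

The frame returned by `run()` is exactly the record layout used in the report sketch above, so the two pieces compose; refitting one model instance across datasets may need per-dataset re-instantiation for some model types, a detail left out of this sketch.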