Main Logic: Forecast Model Selection & Evaluation
Handcraft model selection rules to trial five candidate models for evaluation (holdout prediction)
Use characteristics from the feature-extraction step to decide which models to trial and their parameters
Compare model performance under dynamic configs => DB table [forecast_metrics]
(e.g. window size=365, forecast horizon=60, frequency='D', sort metric='MAPE', etc.)
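The holdout evaluation described above can be sketched as follows. This is a minimal pure-Python sketch, not the production code: the candidate model names, the MAPE implementation, and the shape of the metric rows (mirroring the [forecast_metrics] table) are illustrative assumptions.

```python
# Sketch: split a daily series into train/holdout, forecast with simple
# candidate models, and rank them by the configured sort metric (MAPE).

def mape(actual, forecast):
    """Mean absolute percentage error over the holdout window (skips zeros)."""
    pairs = [(a, f) for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

def naive_forecast(train, horizon):
    """Repeat the last observed value across the horizon."""
    return [train[-1]] * horizon

def seasonal_naive_forecast(train, horizon, season=7):
    """Repeat the last full season (weekly pattern for daily data)."""
    last_season = train[-season:]
    return [last_season[i % season] for i in range(horizon)]

def evaluate_candidates(series, horizon=60, sort_metric="MAPE"):
    """Hold out the last `horizon` points, score each candidate, rank best-first."""
    train, holdout = series[:-horizon], series[-horizon:]
    candidates = {
        "naive": naive_forecast,
        "seasonal_naive": seasonal_naive_forecast,
    }
    rows = []  # conceptually, the rows written to [forecast_metrics]
    for name, model in candidates.items():
        preds = model(train, horizon)
        rows.append({"model": name, "metric": sort_metric,
                     "value": mape(holdout, preds), "horizon": horizon})
    return sorted(rows, key=lambda r: r["value"])
```

On a strongly weekly-seasonal series, `seasonal_naive` ranks first; in the real pipeline the resulting rows would be persisted to [forecast_metrics] and sorted by the configured metric.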
Model family: Naïve, Exponential Smoothing, ETS, ARIMA, Prophet, BATS, Croston, etc.
Variants applied on seasonal parameters, stationarity, and differencing/transformations
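The handcrafted selection rules can be sketched as a mapping from extracted series characteristics to candidate model families. The feature names used here (`intermittency`, `seasonal_strength`, `trend_strength`) and the thresholds are assumptions about the Demand Profiling output, not its actual schema:

```python
# Sketch: rule-based mapping from feature-extraction output to the
# (at most five) candidate models to trial. Thresholds are illustrative.

def select_candidates(features, max_models=5):
    candidates = ["naive"]  # baseline is always evaluated
    if features.get("intermittency", 0.0) > 0.5:
        # Many zero-demand periods: prefer intermittent-demand methods
        candidates.append("croston")
    if features.get("seasonal_strength", 0.0) > 0.3:
        # Clear seasonality: trial seasonal-capable families
        candidates += ["ets_seasonal", "prophet", "bats"]
    else:
        candidates += ["exponential_smoothing", "arima"]
    if features.get("trend_strength", 0.0) > 0.3 and "ets_seasonal" not in candidates:
        candidates.append("ets_trend")
    return candidates[:max_models]  # cap at the five-model trial budget
```

Keeping the rules as plain data-driven branches makes it easy to log which rule fired for each series alongside the evaluation results.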
Documentation with model name and characteristics: model library
Latest Code:
model evaluation
model prediction
Reference: PyCaret Documentation
Dependency: Demand Profiling (feature extraction), still owned by Zengyu
Related Issues: