Closed by jaredsnyder 1 month ago
Here's a notebook to validate the PR. We're not getting an exact match on the search forecasts, but @m-d-bowerman and I have concluded the models match and that the difference is due to how Prophet sets its random seed: https://colab.research.google.com/drive/1dLeLUz_99ln9PC1AG-izZILj9-zIJHmJ#scrollTo=70upJ3eUTvkh
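For context on the seed point: Prophet's uncertainty intervals come from random sampling, so two otherwise-identical models can produce slightly different forecasts unless the RNG is seeded identically. A minimal NumPy-only illustration of the effect (no Prophet involved; `simulate_interval` is a hypothetical stand-in for Prophet's trend sampling):

```python
import numpy as np

def simulate_interval(rng: np.random.Generator, n: int = 1000) -> tuple:
    """Stand-in for a sampled uncertainty interval: draw samples
    around a fixed trend and take the 10th/90th percentiles."""
    samples = rng.normal(loc=100.0, scale=5.0, size=n)
    return (np.percentile(samples, 10), np.percentile(samples, 90))

# Same seed -> identical intervals; different seeds -> close but not exact.
a = simulate_interval(np.random.default_rng(42))
b = simulate_interval(np.random.default_rng(42))
c = simulate_interval(np.random.default_rng(7))

assert a == b  # reproducible with a fixed seed
assert a != c  # seed-dependent, hence no exact match across runs
```

This is the same reason two correct implementations of the same model can disagree on interval endpoints while agreeing on the point forecast.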
Note on the validation: https://docs.google.com/document/d/1kG75iCFHSxBYVz6EcaYhozOZ9KfK7ncKvB5YfOmaB6I/edit?usp=sharing
WRT code complexity: yeah, that's the definite downside of trying to "promote" models with segments so they'd be easier to use. I can take another pass at documenting/commenting so it's easier to work with, and can brainstorm ways to clean it up. We could also meet to try to come up with something, if you think that'd be useful.
Another thing I want to look into is using Darts (https://unit8co.github.io/darts/), which might eliminate a lot of the wrapper code around Prophet, and maybe some of the data-handling code too.
Darts does look neat! I tried to evaluate it as part of the KPI model-selection exercise we used to decide on Prophet, but at the time it didn't have M1 support, and that was enough of a blocker for local development that I didn't explore it further.
Changes:

- `kpi_forecasting.py`: … before passing data to model classes
- `BaseEnsembleForecast`, created to deal with segmented models like `FunnelForecast` used to implement
- `ProphetAutotunerForecast`, created to implement automated hyperparameter tuning
- `FunnelForecast` recreated as a `BaseEnsembleForecast` that uses a `ProphetAutotunerForecast` as the base model
- `summarize` and `write_functions`, along with all the functions called within them, moved outside of classes

Checklist for reviewer:
- No changes (particularly to `.circleci/config.yml`) will cause environment variables (particularly credentials) to be exposed in test logs
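As a footnote on the restructuring listed under "Changes": a minimal sketch of how an ensemble class could wrap per-segment base models. The class and method names follow the PR's naming, but the bodies are illustrative stand-ins, not the repo's actual code:

```python
from dataclasses import dataclass, field

class ProphetAutotunerForecast:
    """Stand-in for the per-segment base model; the real version
    would tune Prophet hyperparameters automatically."""
    def fit(self, observed):
        self.mean_ = sum(observed) / len(observed)
        return self

    def predict(self, horizon):
        # Trivial flat forecast, just to show the interface.
        return [self.mean_] * horizon

@dataclass
class BaseEnsembleForecast:
    """Fits one base model per segment and collects their forecasts."""
    model_class: type = ProphetAutotunerForecast
    models: dict = field(default_factory=dict)

    def fit(self, data_by_segment):
        # One independently fit model per segment (e.g. per country).
        for segment, observed in data_by_segment.items():
            self.models[segment] = self.model_class().fit(observed)
        return self

    def predict(self, horizon):
        return {seg: m.predict(horizon) for seg, m in self.models.items()}

forecast = BaseEnsembleForecast().fit({"US": [1.0, 2.0, 3.0], "DE": [4.0, 6.0]})
print(forecast.predict(2))  # {'US': [2.0, 2.0], 'DE': [5.0, 5.0]}
```

Under this shape, `FunnelForecast` becomes a thin configuration of `BaseEnsembleForecast` rather than a standalone class, which is the promotion-with-segments idea discussed above.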