drbenvincent opened 1 week ago
Check out this pull request on ReviewNB to see visual diffs & provide feedback on Jupyter Notebooks.
Good progress so far. But we are missing separate Bayesian $R^2$ for training and validation phases.
Synthetic control figure:
Interrupted time series figure:
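To make the request concrete, here is a minimal sketch of computing separate Bayesian $R^2$ values for the training and validation windows. Everything here is illustrative, not CausalPy's API: the variable names (`validation_time`, `treatment_time`, `mu`) and the synthetic data are assumptions; the only real call is ArviZ's `az.r2_score`, which takes observed values and a `(draws, n_obs)` array of posterior-predictive draws.

```python
import numpy as np
import arviz as az

rng = np.random.default_rng(0)

# Toy setup: 100 time points, validation window starts at t=50,
# intervention at t=70. These names are hypothetical, for illustration only.
n_obs, validation_time, treatment_time = 100, 50, 70
t = np.arange(n_obs)
y = 0.5 * t + rng.normal(0, 2, size=n_obs)  # synthetic observed series

# Stand-in for posterior-predictive draws of the mean, shape (draws, n_obs)
n_draws = 500
mu = 0.5 * t + rng.normal(0, 0.5, size=(n_draws, n_obs))

# Bayesian R^2 computed separately on each window
r2_train = az.r2_score(y[:validation_time], mu[:, :validation_time])
r2_val = az.r2_score(
    y[validation_time:treatment_time], mu[:, validation_time:treatment_time]
)
print(r2_train)  # pandas Series with "r2" and "r2_std"
print(r2_val)
```

Reporting both values side by side would make it easy to spot overfitting: a training $R^2$ much higher than the validation one suggests the model won't extrapolate well past the intervention.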
Attention: Patch coverage is 90.47619% with 4 lines in your changes missing coverage. Please review.

Project coverage is 85.81%. Comparing base (f6fd97c) to head (444e363).
| Files | Patch % | Lines |
|---|---|---|
| causalpy/pymc_experiments.py | 83.33% | 4 Missing :warning: |
:umbrella: View full report in Codecov by Sentry.
Based on one of @cetagostini's PRs (https://github.com/pymc-labs/CausalPy/pull/368), I'm wondering if we should add a small feature to calculate a ROPE based on the validation period. Something a bit like this:
Any thoughts/comments welcome. I'm not convinced this is a good idea yet - especially because once we add in actual time series models then the credible interval will increase as we forecast further into the future.
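One way the validation-based ROPE idea could work, as a hedged sketch rather than a proposed implementation: take the model's impact draws (observed minus counterfactual) during the validation period, where the true impact should be zero, use a central interval of those draws as the ROPE, and then ask what fraction of post-intervention impact draws fall inside it. All array names and the synthetic draws below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical impact draws (observed - counterfactual), shape (draws, time).
# The validation period should show ~zero impact; the post period a real one.
impact_validation = rng.normal(0.0, 1.0, size=(500, 20))
impact_post = rng.normal(3.0, 1.0, size=(500, 30))

# ROPE: central 95% interval of the pooled validation-period impact draws
lo, hi = np.quantile(impact_validation, [0.025, 0.975])

# Fraction of post-intervention impact draws falling inside the ROPE
inside = np.mean((impact_post >= lo) & (impact_post <= hi))
print(f"ROPE = [{lo:.2f}, {hi:.2f}], P(impact in ROPE) = {inside:.2f}")
```

This also makes the forecasting concern above easy to see: with a proper time-series model the post-period credible intervals widen with horizon, so a ROPE calibrated on the (near-horizon) validation period may flag distant forecasts too readily.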
- `test_its` was testing synthetic control rather than interrupted time series
- TODO: add an `intervention_time` kwarg and add in additional logic to the existing classes?
- Raise a `ValueError` when `validation_time` >= `treatment_time`
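The last item could be a one-line guard at experiment construction time. A minimal sketch (the function name and message are hypothetical, not CausalPy's implementation):

```python
def check_times(validation_time: float, treatment_time: float) -> None:
    """Illustrative guard: the validation period must end before the
    intervention starts, otherwise 'validation' would include treated data."""
    if validation_time >= treatment_time:
        raise ValueError(
            f"validation_time ({validation_time}) must be strictly earlier "
            f"than treatment_time ({treatment_time})"
        )

check_times(50, 70)  # OK; check_times(70, 70) would raise ValueError
```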
📚 Documentation preview 📚: https://causalpy--367.org.readthedocs.build/en/367/