Closed drbenvincent closed 5 months ago
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 77.10%. Comparing base (a64fc0a) to head (0db64fc).
:umbrella: View full report in Codecov by Sentry.
The failing doctest seems to be caused by a broken import in arviz, nothing internal to causalpy.
I left some comments. Let me know what you think. I think it would bring a lot of clarity to use the potential outcomes language (just a suggestion).
It would also be nice to indicate briefly after each method how this could be done in CausalPy. I think it would be a great entry point for the library.
Thanks for the review. I think it's just my edit to the ANCOVA/covariate(s) section that might need another look.
For the moment I'm trying to steer clear of mentioning the potential outcomes framework, or talking about DAGs and backdoors etc. My rough goal here is to create a series of relatively self-contained, focussed knowledge base pages. So this docs page is intended to focus on the experiment design side of things, but there will be another docs page focussing on DAGs for the different quasi-experimental designs, and maybe others on the potential outcomes framework or g-computation. Similarly, I'm trying to keep some separation between the theory (in the knowledge base) and practice (in the example notebooks). It might not always be like that, but at the moment that seems like the right structure to tackle things in relatively bite-sized chunks.
Got it! Thanks for providing context!
BTW: The test is failing because of https://github.com/pymc-labs/pymc-marketing/pull/608. An arviz release will fix it.
Remote tests are still failing despite an arviz release (https://github.com/arviz-devs/arviz/releases/tag/v0.18.0) 12 hours ago.