MasaAsami opened this issue 3 years ago
Hi @MasaAsami,
If I understood you correctly, there's already a well-known technique in this field called "back-testing". The idea is similar to what you discussed, but instead of comparing impacts on training data against test data, the approach is to choose a model and perform several evaluations on the training data to assess its performance (using the same technique you described).
You can then compare several distinct models (adding various elements such as a local level, trends, seasonal components and so on) and choose the best one to finally run on the test data.
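For illustration, here's a rough sketch of that back-testing loop (not an existing feature of the library). It assumes tfcausalimpact's public `CausalImpact(data, pre_period, post_period, model_args=...)` interface, its `p_value` attribute, and that `summary_data` keeps the usual layout with an `abs_effect` row and an `average` column; the cut points and candidate `model_args` are just placeholders.

```python
import pandas as pd
from causalimpact import CausalImpact


def backtest_model(training_data: pd.DataFrame, cut_points, model_args):
    """Place placebo 'interventions' at each cut point (all inside the true
    pre-intervention window, where no effect exists) and record the estimated
    effect and p-value; a well-specified model should report ~0 effect."""
    rows = []
    for cut in cut_points:
        pre_period = [training_data.index[0], training_data.index[cut - 1]]
        post_period = [training_data.index[cut], training_data.index[-1]]
        ci = CausalImpact(training_data, pre_period, post_period,
                          model_args=model_args)
        rows.append({
            "cut": cut,
            "p_value": ci.p_value,
            "abs_effect": ci.summary_data.loc["abs_effect", "average"],
        })
    return pd.DataFrame(rows)


# Candidate structural components to compare (placeholder settings):
candidates = {
    "level_only": {"nseasons": 1},
    "weekly_season": {"nseasons": 7},
}
# training_data: a DataFrame covering only the true pre-intervention period
# scores = {name: backtest_model(training_data, [60, 70, 80], args)
#           for name, args in candidates.items()}
```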
I'd happily merge a PR adding a feature like that. The only requirements I'd ask for are that it be fully unit tested and, since there will probably be some performance issues, that training be performed in parallel as well (and I'm not quite sure how the various models would be chosen; I'm open to ideas).
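On the parallel part, something like the following might be a starting point; it's only a sketch using the standard library, reusing the hypothetical `backtest_model` helper and `candidates` dict from the snippet above, and whether separate processes play nicely with the TensorFlow backend is something to verify.

```python
from concurrent.futures import ProcessPoolExecutor


def run_backtests(training_data, candidates, cut_points, workers=4):
    """Fit each candidate model's placebo runs in its own process and
    return a {model_name: results DataFrame} mapping."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = {
            name: pool.submit(backtest_model, training_data, cut_points, args)
            for name, args in candidates.items()
        }
        return {name: fut.result() for name, fut in futures.items()}
```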
As for pycausalimpact vs tfcausalimpact, notice they are essentially the same. I still don't know what happened to the former, as I no longer have contact with Dafiti's IT team, but you can use the latter interchangeably.
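In practice (at least in the versions I've used) the switch should be little more than changing the installed package, since the import path stays the same:

```python
# pip install tfcausalimpact   (instead of pycausalimpact)
from causalimpact import CausalImpact  # same import in both packages

# ci = CausalImpact(data, pre_period, post_period)
# print(ci.summary())
```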
Let me know if this answers your question.
Best,
Will
I see, "back-testing" may indeed be a more appropriate expression.
The sensitivity of the causal effect under this scheme obviously depends on the model we set up, so I think it is essentially the same as "A/A testing".
I'll organize my thoughts and try it out during the holidays.
Thanks for the reply!
Masa
Thank you for a very nice package.
I've recently been trying to switch from pycausalimpact, and I don't fully understand this module yet, so I'll only offer an idea today.
[Idea] How about adding a feature like a so-called A/A test?
[Contents] For the period before the true intervention:
[The following is just an image]
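Roughly, the idea in code would be something like this minimal sketch, assuming only tfcausalimpact's public `CausalImpact(data, pre_period, post_period)` interface and its `p_value` attribute; the placebo cut point and the 0.05 threshold are just placeholders.

```python
from causalimpact import CausalImpact


def aa_test(pre_data, placebo_cut, alpha=0.05):
    """Pretend an intervention happened at `placebo_cut` inside the true
    pre-intervention data; a well-specified model should report no
    significant effect there."""
    pre_period = [pre_data.index[0], pre_data.index[placebo_cut - 1]]
    post_period = [pre_data.index[placebo_cut], pre_data.index[-1]]
    ci = CausalImpact(pre_data, pre_period, post_period)
    return {"p_value": ci.p_value, "no_effect_detected": ci.p_value >= alpha}
```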