Closed bange83 closed 1 year ago
Hi,
Thanks for investigating this. You're right about changing the `iloc` to `loc`. My initial thoughts are that the point predictions are pretty similar, but the Python version has wider confidence intervals. It's worth noting that they use different estimation methods, so I wouldn't expect them to give identical results at the moment. I'm not sure what's going on with the x-axis markers on the plot either; I might raise a separate issue for that.
Hi @jamalsenouci , first of all, thank you for this python implementation of CI!
I've been trying to understand why the credible intervals from the Python implementation, which uses MLE, are wider than those from Google's MCMC, but I couldn't figure it out yet. Do you know why this happens?
As far as the theory goes, I can't see why they would lead to such different results; if anything, I would expect MLE to give more precise results given its closed-form solution.
In Google's code there is an upper bound asserted on the standard deviation of the level. Could this explain the difference? Maybe the Markovian sampling is not allowed to go beyond the standard deviation of the input data, which would act as a cap on the CI, given that y can't vary by more than 1 sd(y).
I couldn't find an explanation for that in the paper either.
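To illustrate the intuition above, here is a minimal sketch (not Google's actual code; the gamma draws are a made-up stand-in for posterior samples) of how capping the level standard deviation at sd(y) mechanically narrows a predictive interval, which would be consistent with the MCMC intervals coming out tighter than the unconstrained MLE ones:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=10.0, scale=2.0, size=100)
sd_y = y.std(ddof=1)

# Stand-in for posterior draws of the local-level standard deviation.
sigma_draws = rng.gamma(shape=2.0, scale=2.0, size=10_000)

# Per the discussion above, Google's sampler effectively caps sigma at
# sd(y); an unconstrained MLE fit has no such bound.
capped = np.minimum(sigma_draws, sd_y)

# A rough 95% predictive half-width is proportional to sigma, so the
# capped draws yield a narrower interval on average.
width_uncapped = 1.96 * sigma_draws.mean()
width_capped = 1.96 * capped.mean()
assert width_capped <= width_uncapped
```

Since `min(sigma, sd_y) <= sigma` holds draw by draw, the capped interval can never be wider, whatever the true posterior looks like.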
closing in favour of #42
Hey Jamal, I played around with your package a little and I really appreciate your work! I'm looking forward to porting some R stuff to Python, and thanks to your effort it now seems nearly possible :)
Unfortunately I encountered some differences between these two worlds which I cannot explain at the moment. See below. If you have any idea, let me know.
R Version
Python Version
So the Python version seems much more restrictive.
By the way: in inferences.py I had to change an `iloc`-based selection to `loc`.
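A minimal sketch of why that swap matters (the exact lines in inferences.py are not shown here, so this is an assumed, simplified reproduction): on a `DatetimeIndex`, `.loc` selects by label while `.iloc` selects by integer position, so label-based slicing through `.iloc` picks the wrong rows or raises.

```python
import pandas as pd

# Toy frame with a date index, as CausalImpact input typically has.
df = pd.DataFrame(
    {"y": [1.0, 2.0, 3.0, 4.0]},
    index=pd.date_range("2018-01-01", periods=4, freq="D"),
)

# Label-based selection of the post-period works with .loc ...
post = df.loc["2018-01-03":]
assert len(post) == 2

# ... while .iloc would need integer positions to get the same rows.
same = df.iloc[2:]
assert post.equals(same)
```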
Using:
- python 3.6.3
- causalimpact 0.1.1
- numpy 1.13.3
- pandas 0.21.1
- seaborn 0.8.1
- statsmodels 0.8.0
- zeromq 4.1.3 0
Also, the Python plot shows wrong timestamp markers :)
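A hedged guess at the marker issue: if the series is plotted against integer row positions instead of its `DatetimeIndex`, the x-axis ticks show row numbers rather than dates. This sketch (hypothetical data, not the package's plotting code) shows plotting against the index directly so matplotlib formats the ticks as timestamps:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripting
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

idx = pd.date_range("2018-01-01", periods=30, freq="D")
series = pd.Series(np.arange(30, dtype=float), index=idx)

fig, ax = plt.subplots()
ax.plot(series.index, series.values)  # x-axis carries real timestamps
fig.autofmt_xdate()                   # rotate/format the date labels
fig.savefig("plot.png")
```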