Closed dengzeyu closed 8 months ago
@arm61 this page in the docs talks about bootstrapping (which needs updating). And possibly means these examples were generated with the old variance estimation scheme.
Just for tracking - this issue is related to the joss submission: https://github.com/openjournals/joss-reviews/issues/5984#issuecomment-1793653579
This is a known issue because we use stochastic MCMC sampling. I have added the random state to the docs and removed the out-of-date mention of bootstrapping in https://github.com/bjmorgan/kinisi/commit/e2dcee67d63c362d212cba8ed25e0c0b8c3e560b and https://github.com/bjmorgan/kinisi/commit/54734c670825e8b330f0db3e62e70be6df7a485c
@arm61 I've rerun the notebook, but I still get different results compared to those shown on the documentation website. For the same example: https://kinisi.readthedocs.io/en/latest/vasp_dj.html
diff.D_J.n, diff.D_J.con_int()
The webpage shows
(1.6995159773675975e-05, array([1.40932494e-06, 3.78731225e-05]))
However, I got
(1.6953192500355356e-05, array([1.98685233e-06, 3.81374320e-05]))
In addition, these results differ considerably from your previous version shown above; did you change the input parameters?
I suspect that this is due to small variations in the exact environment that you have. Note that we do not aspire to complete reproducibility between your build of the docs and those online (however, if you rerun the notebook you should get the same numbers repeatably). I am happy to add a statement to this effect to the documentation, but cross-machine reproducibility is a very hard problem, so attempting it here feels redundant.
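To illustrate the point (this is a minimal numpy sketch, not the kinisi API): fixing the random state makes a stochastic sampler repeatable on a single machine, while bitwise agreement across different machines or library versions is not guaranteed.

```python
import numpy as np

# Two samplers seeded identically on the same machine draw identical samples.
rng_a = np.random.default_rng(42)
rng_b = np.random.default_rng(42)

# Hypothetical posterior draws (values chosen only for illustration).
samples_a = rng_a.normal(loc=1.7e-5, scale=5e-6, size=1000)
samples_b = rng_b.normal(loc=1.7e-5, scale=5e-6, size=1000)

# Same seed, same environment -> bitwise identical results.
assert np.array_equal(samples_a, samples_b)
# Across environments, small floating-point differences can still appear.
```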
For reference, the difference in the "best-fit" estimate of D_J is ~0.1% of the width of the 95% compatibility interval.
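As a quick sanity check, that ratio can be reproduced from the two sets of numbers posted earlier in this thread:

```python
# Values copied from the thread above.
d_web = 1.6995159773675975e-05    # best-fit D_J shown on the webpage
d_local = 1.6953192500355356e-05  # best-fit D_J from the local rerun
ci_low, ci_high = 1.40932494e-06, 3.78731225e-05  # webpage 95% interval

# Difference between estimates, as a fraction of the interval width.
rel = abs(d_web - d_local) / (ci_high - ci_low)
print(f"{rel:.2%}")  # roughly 0.1%
```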
Yes, please add a statement, and then I'll close this issue.
Isn't this the case for any numerical (rather than analytical) method? @arm61 maybe a generic statement to cover the entire docs rather than just on this page (and then repeating on other pages?)
I have added something to the FAQ about this in https://github.com/bjmorgan/kinisi/commit/0d9a4ab13850301aaa0a1e94373004dfd77ff47e
This seems like a good place to me.
I just ran the notebook examples, and I found the numbers I got are slightly different from those shown on the documentation webpage. I assume this is normal, right? If so, please indicate that there will be some differences when rerunning the notebook examples.
One of the example is on https://kinisi.readthedocs.io/en/latest/vasp_dj.html