sebastianduchene opened this pull request 1 year ago
Thanks @sebastianduchene! Any idea why the results changed so strongly?
Hey @twiecki, that's a good point. I thought it was Monte Carlo error for a second, but then I realised it was something much simpler: the notebook sets a random seed, but it isn't actually taking effect, so the simulated data sets can look quite different from run to run. For example, in *A Simple Case* the data simulated from the Bernoulli have 91 'successes', but the expected number is 0.05 × 1500 = 75; when I ran it on my machine I got 78, which is close to the expectation, and the true value of `p_A` falls well within the posterior. Similarly, in *A and B Together*, the data are simulated from a Bernoulli and are probably a bit different from those in the rendered notebook, leading to different posterior distributions for `p_A`, `p_B`, and `delta`.
In contrast, in *Example: Challenger Space Shuttle Disaster*, everyone gets the same posterior because the data are fixed.
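For concreteness, here's a quick sketch (my own, with arbitrary seeds) of how much the 'success' count in 1500 Bernoulli(0.05) draws scatters around the expected 75 when the data are re-simulated:

```python
import numpy as np

p_A, N = 0.05, 1500
for seed in (1, 2, 3):  # arbitrary seeds, just to show the spread
    rng = np.random.default_rng(seed)
    n_success = rng.binomial(n=1, p=p_A, size=N).sum()
    print(f"seed={seed}: {n_success} successes (expected ~75)")
```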
I tried using `np.random.seed(RANDOM_SEED)` instead of `rng = np.random.default_rng(RANDOM_SEED)`, which seems to work. I'll give this a shot tomorrow and push it if it works well.
Actually the second approach is the recommended one, but we then need to pass the `rng` everywhere, which is a bit clunky too. So maybe try the first approach.
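To make the trade-off concrete, here's a minimal sketch of the two seeding styles (the seed value below is hypothetical; use the notebook's own constant):

```python
import numpy as np

RANDOM_SEED = 8927  # hypothetical value, stands in for the notebook's constant

# Option 1: seed NumPy's legacy global state once at the top of the notebook.
# All later np.random.* calls are then reproducible without passing anything.
np.random.seed(RANDOM_SEED)
data_global = np.random.binomial(n=1, p=0.05, size=1500)

# Option 2 (the style NumPy recommends): create a Generator and pass it
# explicitly to every function that draws samples, so the source of
# randomness is visible at each call site.
rng = np.random.default_rng(RANDOM_SEED)
data_rng = rng.binomial(n=1, p=0.05, size=1500)
```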
`model.initial_values.items` is deprecated. I updated it to `model.rvs_to_initial_values` to print variables and initial values correctly.
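As a rough sketch of the updated call (assuming a recent PyMC where `rvs_to_initial_values` replaced the deprecated `initial_values`; the toy model here is my own, not from the notebook):

```python
import pymc as pm

with pm.Model() as model:
    p = pm.Uniform("p", lower=0, upper=1)

# rvs_to_initial_values maps each free random variable to its user-set
# initial value (None means PyMC falls back to its default initial point).
for rv, init in model.rvs_to_initial_values.items():
    print(rv, init)
```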