Closed wmf1991yeah closed 3 years ago
The easiest way to solve this is probably to increase the burn-in steps and the number of samples.
For example, the daily running scripts on the old repository use a burn-in of 1000 steps and also 1000 samples.
I think you can figure that out ;)
This error can also occur if your priors are not set correctly, but as far as I know one can only test this by trial and error.
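For intuition about what that rhat warning measures: it compares the variance between chains to the variance within each chain, and values well above 1 mean the chains haven't mixed. Below is a pure-NumPy sketch of the classic Gelman-Rubin statistic (pymc3's internal diagnostic is a refined split/rank-normalized variant, so the numbers won't match it exactly), showing why chains stuck near different starting points, e.g. from too little burn-in, blow rhat up:

```python
import numpy as np

def rhat(chains):
    """Classic Gelman-Rubin potential scale reduction factor.

    chains: array of shape (n_chains, n_samples), one row per chain.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    W = chain_vars.mean()                    # average within-chain variance
    B = n * chain_means.var(ddof=1)          # between-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled estimate of the posterior variance
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
# Two chains that have converged to the same target distribution:
good = rng.normal(0.0, 1.0, size=(2, 1000))
# Two chains still stuck near different starting points (too little burn-in):
bad = np.stack([rng.normal(-3.0, 1.0, 1000), rng.normal(3.0, 1.0, 1000)])

print(rhat(good))  # close to 1.0
print(rhat(bad))   # far above the 1.4 threshold in the error message
```

More samples and a longer burn-in give the chains time to leave their starting regions and overlap, which pulls the between-chain variance (and hence rhat) back toward 1.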
Thank you very much! I will try increasing the number of steps and samples. Our priors are the default distributions from the paper. In addition, does PyMC3 support GPUs? Can PyMC3 run on a GPU?
I doubt the rhat error is caused by your priors; test it with more samples/burn-in first.
PyMC3 uses Theano as its backend. Theano supports GPU computation, but unless you have dedicated hardware for that, I doubt it will give you any performance increase.
Haven't tested that myself though, feel free to update us if you get it running.
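If you do want to try it, Theano reads device settings from a `.theanorc` file (or the `THEANO_FLAGS` environment variable). A minimal sketch, assuming a recent Theano with the libgpuarray/pygpu backend installed (older Theano versions used `device = gpu` instead of `device = cuda`):

```ini
[global]
device = cuda
floatX = float32
```

With this in place, Theano should report the CUDA device it is using when it is imported; whether sampling actually gets faster depends on the model.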
Thank you very much! I tried it with more samples (tune=500, draws=1000, chains=2) and got a better result. I haven't figured out how to enable GPU support in PyMC3, although I do have GPU hardware (a GTX 1080 Ti).
The Hamiltonian Monte Carlo algorithm is inherently sequential, so you won't gain an advantage from using a GPU.
OK, I see.
A problem occurs when I run PyMC3 sampling with the SEIR model: "ERROR [pymc3] The rhat statistic is larger than 1.4 for some parameters. The sampler did not converge". The problem is as follows:
How can I address this problem?
PS: The model has a large number of parameters because more change points are set (8 change points). NUTS: [sigma_obs, delay_log, incub, I_begin, new_E_begin, mu, transient_len_7log, transient_len_6log, transient_len_5log, transient_len_4log, transient_len_3log, transient_len_2log, transient_len_1log, transient_day_7, transient_day_6, transient_day_5, transient_day_4, transient_day_3, transient_day_2, transient_day_1, lambda_7log, lambda_6log, lambda_5log, lambda_4log, lambda_3log, lambda_2log, lambda_1log, lambda_0log]