junpenglao closed this issue 8 years ago.
The issue is probably the lack of Gibbs updating of vector-valued variables: a jump is only accepted if all binary values jointly produce a good logp. This PR might be helpful: https://github.com/pymc-devs/pymc3/pull/799
So you can try: `pip install git+https://github.com/pymc-devs/pymc3@gibbs` and then do `Metropolis(gibbs='random')`.
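The acceptance problem described above can be caricatured in plain NumPy: a whole-vector proposal is only accepted if every component jointly looks good, while a single-site (Gibbs-style) scan only has to improve one bit at a time. The toy target `logp` below is made up purely for illustration and is not PyMC3's actual `BinaryMetropolis` implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target over a binary vector: each 0 costs 3 nats of logp,
# so the mode is the all-ones vector. (Made-up target, for illustration only.)
def logp(z):
    return -3.0 * np.sum(z == 0)

d = 20
z_joint = np.zeros(d, dtype=int)   # updated by whole-vector proposals
z_gibbs = np.zeros(d, dtype=int)   # updated one component at a time

for _ in range(500):
    # Whole-vector Metropolis: propose a fresh random binary vector.
    # Such a jump is only accepted if ALL components jointly produce a good logp.
    prop = rng.integers(0, 2, size=d)
    if np.log(rng.random()) < logp(prop) - logp(z_joint):
        z_joint = prop

    # Gibbs-style single-site update: flip one randomly chosen component.
    i = rng.integers(d)
    prop = z_gibbs.copy()
    prop[i] = 1 - prop[i]
    if np.log(rng.random()) < logp(prop) - logp(z_gibbs):
        z_gibbs = prop

print("whole-vector proposals reached", z_joint.sum(),
      "ones; single-site updates reached", z_gibbs.sum())
```

The single-site chain walks to (near) the all-ones mode because each favorable flip is accepted on its own, whereas the whole-vector chain stalls: random joint proposals almost never improve every component at once.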
Thank you for your reply @twiecki, however the problem still remains... There is still no jump in the binary vector.
You should not be assigning `Metropolis` to `Bernoulli` random variables. It's best just to let PyMC3 select step methods for you, unless you have reason to do otherwise. All you need to call is:
trace7 = pm.sample(1e4, start=start)
Thanks @fonnesbeck, now the `Bernoulli` variable does jump. However, it very easily gets stuck in a local minimum and the model does not converge to the optimal estimate.
I am not sure if it's related, but `pm.find_MAP()` returns `zi` as all ones. That's why I manually set it to a random start: `start['zi'] = np.random.binomial(1, .5, p)`
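For concreteness, that random start can be sketched with NumPy alone; `p` (the length of `zi`) is a placeholder value here, not the value from the notebook:

```python
import numpy as np

np.random.seed(42)
p = 10  # length of the binary vector zi; placeholder size for illustration

# Instead of the all-ones zi that find_MAP() returned, draw a random 0/1 start
# where each component is 1 with probability .5:
start = {'zi': np.random.binomial(1, .5, p)}
print(start['zi'])
```

This `start` dict is what would then be passed to `pm.sample(..., start=start)`.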
@junpenglao I just pushed an update to the `gibbs` branch that adds Gibbs sampling to the `BinaryMetropolis` sampler. Can you update, move to that branch, and try again using automatic step method assignment?
Just pushed another update. If you test the `gibbs` branch now there should be a new step method assigned (`Assigned BinaryGibbsMetropolis to zi`) and you should get better convergence and mixing of the binary assignments.
Yes that works perfectly, thanks so much @twiecki
@junpenglao have you ported more examples from the Wagenmakers book?
@twiecki I have. I am working through his book and am at chapter 8 so far. I was planning to upload them to GitHub when I finish. It would be an honor if you guys would like to include them as examples.
Plus one for this :)
@junpenglao Definitely, that book got me started with Bayesian stats.
Me too! I only wish I had discovered it sooner.
@twiecki @springcoil I have ported most of the book at https://github.com/junpenglao/Bayesian-Cognitive-Modeling-in-Pymc3. Some models still need further optimization, but most of them return the same results as in the book.
This is really nice, thanks!
Hi there, I am fairly new to Bayesian computation and PyMC3, so this is probably a very naive question. I am trying to adapt some pymc2 code to pymc3. It is a latent-mixture model from Lee and Wagenmakers' Bayesian Cognitive Modeling: A Practical Course, page 93 (Chapter 6, example 7). While the pymc2 code runs without any problem, there is a problem in the pymc3 model: one of the stochastic variables is not being sampled properly...
pymc2 code:
pymc3 code:
(I tried different sampling methods but the result is the same.)
The problem seems to be that the grouping variable vector "zi" is not updated during sampling. It gets stuck at its initial value and is never updated.
You can have a look at the output from the notebook here
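For readers without access to the notebook: a latent-mixture model of this kind uses a binary indicator `zi` to assign each subject to one of two response processes. Below is a purely hypothetical generative sketch in plain NumPy, with made-up sizes and rates; it is not the book's or the notebook's actual model, just an illustration of the role `zi` plays:

```python
import numpy as np

rng = np.random.default_rng(1)

n_people, n_questions = 15, 40
phi = 0.9  # success rate of the second group (illustrative value)

# zi picks the latent group for each person:
#   zi == 0 -> responds at chance (rate .5)
#   zi == 1 -> responds at the higher rate phi
zi = rng.binomial(1, 0.5, n_people)
rate = np.where(zi == 1, phi, 0.5)

# Observed number of correct answers per person
k = rng.binomial(n_questions, rate)
print("group indicators:", zi)
print("correct counts:  ", k)
```

During inference the sampler has to move `zi` between 0 and 1 for each person; if the step method never proposes (or never accepts) those flips, `zi` stays frozen at its starting value, which is exactly the symptom described above.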