mdnunez / pyhddmjags

Repository for example Hierarchical Drift Diffusion Model (HDDM) code using JAGS in Python. These scripts provide useful examples for using JAGS with pyjags, the JAGS Wiener module, mixture modeling in JAGS, and Bayesian diagnostics in Python.
GNU General Public License v3.0

Question about prior distributions #1

Open AGhaderi opened 3 years ago

AGhaderi commented 3 years ago

Dear Dr. Nunez, thank you for sharing your code and examples; they have been very useful for developing my own work. I have a question about the prior distributions you chose in the Stan model file `stancode/nolapse_test.stan`. Please consider the following priors in that file:

```stan
// Between-participant variability in choice A start point bias
betasd ~ gamma(.3, 1);
// Hierarchical start point bias towards choice A
betahier ~ normal(.5, .25);
```

How did you choose the mean and standard deviation values for the DDM parameters? And what is the best strategy for determining prior distributions?

mdnunez commented 3 years ago

The short answer is that I pick prior distributions based on previous publications, and such that the prior distributions place weight over plausible values of the parameters. For instance, random draws from a normal distribution with a mean of .5 and a standard deviation of .25 will result in about 68% of those draws falling within .25 and .75 (one standard deviation) and about 95% within 0 and 1 (two standard deviations). [0, 1] is the domain of beta (relative start point bias), so it makes sense for beta's mean parameter across participants to be within or close to this domain.
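These proportions can be checked with a quick Monte Carlo sketch (not code from the repository; the variable names are illustrative, and Stan's `normal(mu, sigma)` takes a standard deviation, which is what is simulated here):

```python
import random

# Draw from a Normal(0.5, 0.25) prior, as in `betahier ~ normal(.5, .25);`,
# and estimate how much prior mass falls in the intervals mentioned above.
rng = random.Random(42)
draws = [rng.gauss(0.5, 0.25) for _ in range(100_000)]

# Fraction of draws within one standard deviation of the mean, [0.25, 0.75]
within_one_sd = sum(0.25 <= d <= 0.75 for d in draws) / len(draws)
# Fraction of draws within two standard deviations, [0, 1] (beta's domain)
within_two_sd = sum(0.0 <= d <= 1.0 for d in draws) / len(draws)

print(f"P(0.25 <= beta <= 0.75) ~ {within_one_sd:.3f}")  # about 0.68
print(f"P(0.00 <= beta <= 1.00) ~ {within_two_sd:.3f}")  # about 0.95
```

So roughly 5% of prior draws fall outside beta's [0, 1] domain, which the text above treats as acceptable for a hierarchical mean parameter.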

A better answer is that there is (1) ongoing research about what the best prior distributions are, depending upon the type of sampler, and (2) much philosophical discussion about whether to use "informative" (generally narrow) or "weakly informative" (generally wide) priors. As modelers we should experiment with different priors in simulation to see whether, and how, they change the results. However, I've never found that different "weakly informative" priors changed the posterior distributions of hierarchical DDM parameters much. Prior distributions can change posterior distributions when those priors are very narrow, such that some values in the parameter's domain become near impossible ("informative" priors). For instance, a prior of betahier ~ normal(.5, .01) would restrict the posterior distribution of betahier to be approximately in the domain [.46, .54], within 4 standard deviations on both sides of the mean.
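The contrast between the two priors can be sketched numerically (again an illustrative simulation, not repository code): almost all mass of the narrow Normal(0.5, 0.01) prior sits inside [0.46, 0.54], while the weakly informative Normal(0.5, 0.25) prior places only a small fraction of its mass there.

```python
import random

# Compare prior mass in [0.46, 0.54] (4 standard deviations around the
# mean of the narrow prior) for the two priors discussed above.
rng = random.Random(7)
n = 100_000
narrow = [rng.gauss(0.5, 0.01) for _ in range(n)]  # betahier ~ normal(.5, .01)
wide = [rng.gauss(0.5, 0.25) for _ in range(n)]    # betahier ~ normal(.5, .25)

in_band_narrow = sum(0.46 <= d <= 0.54 for d in narrow) / n
in_band_wide = sum(0.46 <= d <= 0.54 for d in wide) / n

print(f"narrow prior mass in [0.46, 0.54]: {in_band_narrow:.4f}")  # ~1.0
print(f"wide prior mass in [0.46, 0.54]:   {in_band_wide:.4f}")    # ~0.13
```

With the narrow prior, the likelihood would have to be extremely strong to pull the posterior outside [0.46, 0.54], which is why such priors dominate the posterior.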

See a discussion about this topic by Andrew Gelman here: https://github.com/stan-dev/stan/wiki/Prior-Choice-Recommendations

AGhaderi commented 3 years ago

Dear Dr. Nunez,

Thank you for your feedback and consideration; I see your point.

Kind regards, Amin Ghaderi-Kangavari

mdnunez commented 3 years ago

No problem, Amin! I will leave this Issue open, since I don't think the question of the best prior distributions to use in Hierarchical Drift-Diffusion Models (HDDMs) is necessarily "solved".