GStechschulte opened this issue 5 months ago
Prior predictive checks are really useful, and we should make them easier for users to generate and explore. In PreliZ we have a function (very new and still under-tested) to iteratively explore how the prior affects the prior predictive distribution: https://preliz.readthedocs.io/en/latest/examples/observed_space_examples.html
It works for models defined in Bambi, PyMC, and PreliZ.
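If I'm reading the linked example right, the workflow looks roughly like the sketch below (a minimal, hypothetical model; the exact signature of `predictive_explorer` may differ from what I show here):

```python
import preliz as pz

# Minimal sketch (hypothetical model): the prior predictive is written as a
# plain Python function whose keyword arguments are the prior parameters.
def prior_predictive(mu=0.0, sigma=1.0):
    intercept = pz.Normal(mu, 1).rvs()           # prior draw for the intercept
    noise = pz.HalfNormal(sigma).rvs()           # prior draw for the noise scale
    return pz.Normal(intercept, noise).rvs(100)  # simulated observations

# Interactive exploration: widgets for mu and sigma, with the simulated
# prior predictive distribution re-drawn as they change.
pz.predictive_explorer(prior_predictive)
```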
@aloctavodia this is great! Thanks for the link!
See Vincent's Prior Predictive Checks (ppc) with marginaleffects and brms blog post.
The idea is to simulate from the model, without using the data, in order to refine the model before fitting. One major challenge lies in interpretation: when the parameters of a model are hard to interpret, the user will often need to transform them before they can assess whether the generated quantities make sense and whether the priors are an appropriate representation of the available information.
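As a concrete (made-up) illustration of that transformation step, consider a logistic regression sketched in PyMC: the raw coefficients live on the log-odds scale, so the prior is much easier to judge after pushing it through the inverse link onto the probability scale.

```python
import numpy as np
import pymc as pm

x = np.linspace(-2, 2, 50)

with pm.Model() as model:
    intercept = pm.Normal("intercept", 0, 1.5)
    slope = pm.Normal("slope", 0, 1)
    # Transform from log-odds to probabilities, where the prior is interpretable.
    p = pm.Deterministic("p", pm.math.sigmoid(intercept + slope * x))
    pm.Bernoulli("y", p=p)
    # Simulate from the priors only; no observed data are used.
    idata = pm.sample_prior_predictive(500)

# Assess the prior on the probability scale rather than via the raw coefficients.
prior_p = idata.prior["p"]
```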
With the `interpret` sub-package, we might be able to plot and summarize prior predictive checks similar to how {marginaleffects} does this. With `interpret`, should we be able to pass a Bambi model and an InferenceData object that only contains the priors (or prior predictive) for simulation? Bambi already has the `prior_predictive` method. I can imagine this functionality would reduce the amount of cumbersome boilerplate code needed to produce PPC plots and summaries with matplotlib/seaborn and the transformation of parameter values. At work, I find myself performing PPCs for hierarchical models, and it is useful to perform the check not only at the population level but also for group-specific effects, etc.

Just a thought :)
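For concreteness, the kind of boilerplate this could help replace looks roughly like the sketch below (the data, formula, and plotting calls are just placeholders, not a proposal for the API):

```python
import bambi as bmb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Hypothetical data set with one predictor and a grouping factor.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "y": rng.normal(size=120),
    "x": rng.normal(size=120),
    "group": np.repeat(["a", "b", "c"], 40),
})

model = bmb.Model("y ~ x + (x | group)", df)
idata = model.prior_predictive(draws=500)

# Population-level check: pool all simulated outcomes and plot by hand.
y_sim = idata.prior_predictive["y"].values.reshape(-1)
plt.hist(y_sim, bins=50, density=True)
plt.xlabel("simulated y")

# Group-specific checks, transformations of coefficients, etc. require
# more of this kind of manual slicing and plotting of idata.prior.
```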