jellis18 opened 7 years ago
I think the most sensible way to do this is to have a `ParameterGroup` or `ParameterCollection` object that carries the joint prior. `scipy.stats` provides multivariate distributions subclassed from `multi_rv_generic`. The `ParameterGroup` can be initialized with a joint prior in the same way the individual parameters are. Individual parameters could (must?) still have their own priors. The sampler "grouper" should recognize the `ParameterGroup` as a group.
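A minimal sketch of what such a container could look like, assuming the joint prior is any frozen `scipy.stats` multivariate distribution (the `ParameterGroup` class, its attributes, and its methods here are hypothetical, not part of enterprise):

```python
import numpy as np
from scipy.stats import multivariate_normal


class ParameterGroup:
    """Hypothetical container for parameters that share a joint prior.

    `joint_prior` is any frozen scipy.stats multivariate distribution
    (subclassed from multi_rv_generic), e.g. multivariate_normal.
    """

    def __init__(self, names, joint_prior):
        self.names = list(names)
        self.joint_prior = joint_prior

    def get_logpdf(self, values):
        # Evaluate the joint prior at one point of the group's space,
        # with `values` ordered like `self.names`.
        return self.joint_prior.logpdf(np.asarray(values))

    def sample(self):
        # Draw one joint sample; returns an array ordered like `names`.
        return self.joint_prior.rvs()


# usage: two correlated parameters with a bivariate normal joint prior
cov = [[1.0, 0.8], [0.8, 1.0]]
group = ParameterGroup(["a", "b"], multivariate_normal(mean=[0, 0], cov=cov))
print(group.get_logpdf([0.0, 0.0]))
```

Initializing the group from a frozen distribution keeps the interface parallel to how individual parameters carry their own 1-d priors.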
@paulthebaker, do you want to have a go at implementing this?
sure
Please hold off for a little while, since I'm refactoring the basic Parameter code (to allow even built-in parameters to take hyperparameters), and you should work from the new version.
Regarding parameter groups, the problem is to make sure that the rest of Enterprise knows how to handle them. The "params" dictionary that we pass to the likelihood components needs to contain individual parameters, even if they belong to a group. So it should be up to the "get_prior" function (currently in the PTA object) to recognize that and compute a single prior for them.
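One way `get_prior` could recognize groups while still passing individual parameters to the likelihood components, sketched with frozen `scipy.stats` distributions (the `get_prior` signature and the `params`/`groups` dictionaries here are illustrative assumptions, not enterprise's actual API):

```python
import numpy as np
from scipy.stats import multivariate_normal, uniform


def get_prior(params, groups, values):
    """Hypothetical sketch of a PTA-level get_prior.

    `params` maps name -> frozen 1-d prior for ungrouped parameters,
    `groups` maps a tuple of names -> frozen joint prior, and
    `values` maps name -> current value.  Grouped parameters are
    evaluated once through their joint prior instead of individually.
    """
    logp = 0.0
    grouped = {n for names in groups for n in names}

    # joint priors: one evaluation per group
    for names, joint in groups.items():
        logp += joint.logpdf(np.array([values[n] for n in names]))

    # individual priors for everything else
    for name, prior in params.items():
        if name not in grouped:
            logp += prior.logpdf(values[name])
    return logp


priors = {"c": uniform(loc=0.0, scale=2.0)}
groups = {("a", "b"): multivariate_normal(mean=[0, 0], cov=[[1, 0.5], [0.5, 1]])}
vals = {"a": 0.0, "b": 0.0, "c": 1.0}
print(get_prior(priors, groups, vals))
```

The likelihood still sees `a`, `b`, and `c` as individual entries in `values`; only the prior evaluation knows about the grouping.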
Yes, `get_prior` should get all of the priors, individual and joint. That's why I think the individual params that make up the group should still have their own prior.
Well, if a, b, and c have the joint PDF p(a,b,c), then a does not have its own PDF, other than the marginal \int p(a,b,c) db dc. So I think the individual params should defer (somehow) to the group prior.
BTW, we need an algorithm that does DAG-style evaluation for a list of parameters, starting with hyper parameters with no dependencies, and going down the graph. The same algorithm could presumably take care of parameter groups.
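The DAG-style evaluation described above amounts to a topological sort over the dependency graph; a sketch using the standard-library `graphlib` (Python 3.9+), where the parameter names and dependency structure are made up for illustration:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+


def evaluation_order(dependencies):
    """Hypothetical sketch of DAG-style parameter ordering.

    `dependencies` maps each parameter to the set of (hyper)parameters
    it depends on.  Hyperparameters with no dependencies come first,
    then the graph is walked downward; a parameter group could be
    treated as a single node in this graph.
    """
    return list(TopologicalSorter(dependencies).static_order())


# hyperparameters mu and sigma feed two lower-level parameters a and b
deps = {"mu": set(), "sigma": set(), "a": {"mu", "sigma"}, "b": {"mu", "sigma"}}
order = evaluation_order(deps)
print(order)
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which would catch ill-formed hyperparameter graphs early.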
Also, pull request #119 now contains my modification to `parameter.py`, which allows hyperparameters for `Uniform`, `Normal`, and `LinearExp` Parameters, and in fact removes the dependence on `prior.py`.
We need functionality for non-separable priors. Currently we assume that p(a,b,c) = p(a)p(b)p(c), but that is not always the case.
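A quick numerical illustration of why the factorized assumption fails, using a correlated bivariate normal whose marginals are standard normals (the choice of distribution and correlation here is just an example):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# For a correlated bivariate normal, the joint density differs from the
# product of its marginals, so p(a,b) != p(a)p(b).
rho = 0.9
joint = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])

point = np.array([1.0, 1.0])
log_joint = joint.logpdf(point)
# The marginals of this joint are standard normals in a and b.
log_product = norm.logpdf(point[0]) + norm.logpdf(point[1])
print(log_joint, log_product)
```

At a point along the correlation direction the joint density is well above the separable product, which is exactly the discrepancy a `ParameterGroup` carrying the joint prior would account for.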