Closed: joergfunger closed this issue 3 years ago.
I had similar problems when prescribing the prior distribution. From my point of view, the latent variable list could actually have names (which may differ from the local names in the model parameter lists it refers to). When adding a variable to this list, you would have to specify either a starting value (for optimization) or a prior. That prior would then not be a prior for each individual parameter in the different model parameter lists, but one for the specific parameter defined as latent.
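A minimal sketch of what such a named latent-variable list could look like. All class and attribute names here are assumptions for illustration, not the project's actual API:

```python
# Hypothetical sketch: a latent variable is registered once under a
# global name, with either a start value or a prior, independently of
# the local names used in the individual model parameter lists.

class LatentVariable:
    def __init__(self, name, start_value=None, prior=None):
        # Require either a starting value (optimization) or a prior
        # (Bayesian inference) at registration time.
        if start_value is None and prior is None:
            raise ValueError("provide a start_value or a prior")
        self.name = name
        self.start_value = start_value
        self.prior = prior
        # (model_name, local_name) pairs this latent variable maps to;
        # the local names in each model parameter list may differ.
        self.refs = []


class LatentVariableList:
    def __init__(self):
        self._vars = {}

    def add(self, name, start_value=None, prior=None):
        self._vars[name] = LatentVariable(name, start_value, prior)
        return self._vars[name]

    def __getitem__(self, name):
        return self._vars[name]


latent = LatentVariableList()
latent.add("E", start_value=2.0)
latent["E"].refs.append(("model_A", "youngs_modulus"))
latent["E"].refs.append(("model_B", "E_mod"))
```

The point of the design is that the prior (or start value) lives on the named latent variable, not on each local parameter entry.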
The last idea, accessing the latent parameters mainly by name, is explored in #10 and here. It seems like a good idea so far.
The code we were talking about is a bit outdated, but the issue itself is still relevant. I propose to add a LatentParameter.set_value(self, value)
method, such that the example above would translate to
problem.latent["b"].set_value(init_value_b)
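A rough sketch of how the proposed `set_value` could work, assuming (this is a guess at the intended design, not the repository's code) that a `LatentParameter` holds references to the model parameter lists it appears in:

```python
# Hypothetical sketch of LatentParameter.set_value: one named latent
# parameter pushes its value into every model parameter list it
# refers to. Names (Problem, add_ref) are illustrative assumptions.

class LatentParameter:
    def __init__(self):
        self._refs = []  # (parameter_list, local_name) pairs

    def add_ref(self, parameter_list, local_name):
        self._refs.append((parameter_list, local_name))

    def set_value(self, value):
        # Propagate the single value to all referenced parameter lists.
        for parameter_list, local_name in self._refs:
            parameter_list[local_name] = value


class Problem:
    def __init__(self):
        self.latent = {}  # global latent name -> LatentParameter


problem = Problem()
problem.latent["b"] = LatentParameter()

params_model1 = {"b_local": 0.0}
problem.latent["b"].add_ref(params_model1, "b_local")

init_value_b = 42.0
problem.latent["b"].set_value(init_value_b)
assert params_model1["b_local"] == 42.0
```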
In the optimization procedure, I have to set initial values. Currently, I have only a single value, which makes it easy to do something like
For the more complex case with multiple experiments, I can set the local variables when creating the MultiModelError, e.g. within the user-defined derived class of MultiModelError.
For parameters that are global (shared by different models), it would be nice to set the latent variables jointly for all models (and outside of the definition of the MultiModelError).
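A small sketch of setting such a shared global parameter jointly, outside any `MultiModelError` subclass. The helper name and the dict-based parameter lists are assumptions for illustration:

```python
# Hypothetical sketch: one call writes the same value into every
# per-experiment parameter list that contains the shared parameter,
# so the user-defined MultiModelError subclass stays untouched.

def set_global_latent(parameter_lists, name, value):
    for plist in parameter_lists:
        if name in plist:
            plist[name] = value


# Two experiments sharing the global parameter "E", each with its own
# experiment-specific parameter "offset".
exp1 = {"E": None, "offset": 0.1}
exp2 = {"E": None, "offset": 0.2}

set_global_latent([exp1, exp2], "E", 30.0)
assert exp1["E"] == exp2["E"] == 30.0
```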
I wanted to implement something similar, but I was not sure how exactly the interface should look, because at the latent_parameter level I only have either the (potentially different) (name, key) pairs for each individual index, or the index itself; both seem unintuitive to me if I want to set a named variable. I could also get the index in the global vector by
Any better idea on how this should be done?