BAMresearch / bayem

Implementation and derivation of "Variational Bayesian inference for a nonlinear forward model" [Chappell et al. 2008] for arbitrary, user-defined model errors.
MIT License

Update single latent variable #7

Closed joergfunger closed 3 years ago

joergfunger commented 3 years ago

In the optimization procedure, I have to set initial values. Currently I have only a single value, which makes it easy to do something like

    start_vector = np.array([0.7])
    result = least_squares(all_experiments_model_error, start_vector)

For the more complex case with multiple experiments, I can set the local variables when creating the multi model error, e.g. within the user-defined derived class of MultiModelError

    single_model_error = LinearModelError(some_prm)
    parameter = single_model_error.get_parameter_dict()
    parameter.define("b", init_value_b)
    self.add(single_model_error, parameter)

For parameters that are global (shared by different models), it would be nice to set the global latent variables jointly for all models (and outside of the definition of the multi model error):

    multi_model_error.add_by_name('b')
    multi_model_error.update('b', init_value_b)
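A minimal sketch of what this interface could look like, assuming each single model error carries a dict-like parameter list; the class body here is an illustrative stand-in, not the actual MultiModelError implementation:

```python
# Hedged sketch of the proposed add_by_name/update interface.
# Everything except the two method names is an assumption.
class MultiModelError:
    def __init__(self):
        self.prms = []   # one parameter dict per single model error
        self.latent = {}  # global name -> referring parameter dicts

    def add(self, parameter):
        self.prms.append(parameter)

    def add_by_name(self, name):
        # declare `name` as a global latent variable, shared by all
        # models whose parameter list defines it
        self.latent[name] = [p for p in self.prms if name in p]

    def update(self, name, value):
        # set the shared value jointly in all referring models
        for p in self.latent[name]:
            p[name] = value


multi = MultiModelError()
multi.add({"A": 1.0, "b": None})
multi.add({"b": None})
multi.add_by_name("b")
multi.update("b", 0.7)
assert all(p["b"] == 0.7 for p in multi.prms)
```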

I wanted to implement something similar, but I was not sure how exactly the interface should look, because on the latent_parameter level I only have either the (potentially different) (name, key) pairs for each individual index, or the index itself, and neither seems intuitive if I want to set a named variable. I could also get the index in the global vector by

    latent_parameters.index_of('b', one_model_that_has_b)

Any better idea on how this should be done?

joergfunger commented 3 years ago

I had similar problems when prescribing the prior distribution. From my point of view, we could think about the latent variable list actually having names (which could differ from the local ones in each model parameter list it refers to). When adding a variable to this list, you would have to specify either a starting value (for optimization) or a prior. That prior would then not be one for each individual parameter in the different model parameter lists, but a single one for the specific parameter defined as being latent.
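One way to read this is a latent entry per global name that holds the start value or prior once, plus references to the local parameter lists it maps onto. A hypothetical sketch, where all names are assumptions:

```python
# Hypothetical latent entry: start value (optimization) or prior (VB)
# defined once per latent variable, not per model parameter list.
class Latent:
    def __init__(self, global_name):
        self.global_name = global_name
        self.refs = []     # (parameter_dict, local_name) pairs
        self.start = None  # starting value for e.g. least_squares
        self.prior = None  # e.g. ("normal", mean, sd) for the VB run

    def add_ref(self, prm, local_name):
        # the local name in a model may differ from the global name
        self.refs.append((prm, local_name))


# global "b" maps onto differently named local parameters
b = Latent("b")
b.add_ref({"b": None}, "b")
b.add_ref({"slope": None}, "slope")
b.start = 0.7
b.prior = ("normal", 0.7, 0.1)
```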

TTitscher commented 3 years ago

The last idea of having the latent parameters mainly sorted/accessed by a name is explored in #10 and here. Seems like a good idea so far.

TTitscher commented 3 years ago

The code we were talking about is a bit outdated, but the issue itself is still relevant. I propose to add a LatentParameter.set_value(self, value) method, such that the example above would translate to

    problem.latent["b"].set_value(init_value_b)
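A minimal sketch of how set_value could work, assuming a latent parameter knows which (parameter_list, local_name) slots it maps to; everything except the set_value signature is an assumption:

```python
# Sketch: one global value pushed into every referring parameter list.
class LatentParameter:
    def __init__(self):
        self.refs = []  # (parameter_dict, local_name) pairs

    def set_value(self, value):
        for prm, local_name in self.refs:
            prm[local_name] = value


# problem.latent["b"].set_value(init_value_b) would then update "b"
# everywhere, even under a different local name:
prm1 = {"b": None}
prm2 = {"b_local": None}

latent = {"b": LatentParameter()}
latent["b"].refs.append((prm1, "b"))
latent["b"].refs.append((prm2, "b_local"))
latent["b"].set_value(0.7)
assert prm1["b"] == 0.7 and prm2["b_local"] == 0.7
```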