BAMresearch / bayem

Implementation and derivation of "Variational Bayesian inference for a nonlinear forward model." [Chappell et al. 2008] for arbitrary, user-defined model errors.
MIT License

Linearity #90

Closed TTitscher closed 2 years ago

TTitscher commented 2 years ago

My rough understanding is that when the linearity assumption

k(theta) ~ k(m) + J(m) (theta - m)

does not hold, the priors and posteriors of the model parameters are no longer conjugate. The more nonlinear the model is, the worse this approximation becomes.

This PR provides the basic framework for measuring the nonlinearity in `bayem.linearity_analysis(model, posterior, ...)`. Currently, the actual model (error) and its linearization are evaluated at a number of (posterior) standard deviations around the posterior mean, by default at the 7 points µ + sd * (-3, -2, -1, 0, 1, 2, 3). This results in a matrix M of shape (N_model_error x 7), and the relative error
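The evaluation-point construction described above can be sketched as follows; the function name and signature are illustrative, not the actual `bayem` API:

```python
import numpy as np

def linearization_points(mu, sd, offsets=(-3, -2, -1, 0, 1, 2, 3)):
    """Points mu + sd * offset around the posterior mean of one parameter.

    Hypothetical helper sketching the default 7-point grid; `mu` and `sd`
    are the posterior mean and standard deviation.
    """
    return mu + sd * np.asarray(offsets, dtype=float)

# Example: posterior mean 1.0, standard deviation 0.5
pts = linearization_points(mu=1.0, sd=0.5)
# pts -> [-0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
```

Evaluating the model error and its linearization at these 7 points (for each of the `N_model_error` residual entries) yields the two matrices compared below.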

|| M_true - M_lin || / || M_lin ||

is used as the measure. The exact definition of this norm is user-provided, but `np.linalg.norm` seems to work. If the model is a good linear fit (around the posterior mean), this value approaches zero.
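As a minimal sketch of that measure (not the actual implementation), with the norm injectable as in the PR:

```python
import numpy as np

def relative_linearization_error(M_true, M_lin, norm=np.linalg.norm):
    # || M_true - M_lin || / || M_lin || with a user-supplied norm;
    # M_true holds the model-error evaluations, M_lin the linearized ones.
    return norm(M_true - M_lin) / norm(M_lin)

M_lin = np.array([[1.0, 2.0], [3.0, 4.0]])
err_exact = relative_linearization_error(M_lin, M_lin)      # 0.0: perfectly linear
err_off = relative_linearization_error(M_lin + 0.1, M_lin)  # > 0: deviation detected
```

For an exactly linear model the two matrices coincide and the measure is zero; any systematic deviation pushes it above zero.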

The output is a nested dict `{noise_group: {parameter: relative_error}}`, and the argument `show=True` provides a debug visualization.
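To illustrate the return structure, a hypothetical result (keys and values are made up) and one way to consume it:

```python
# Nested dict {noise_group: {parameter: relative_error}} as described above;
# "noise0", "A", "B" and the tolerance 0.1 are purely illustrative.
result = {"noise0": {"A": 0.02, "B": 0.31}}

# Flag parameters whose linearization error exceeds a tolerance
suspicious = {p: e for p, e in result["noise0"].items() if e > 0.1}
# suspicious -> {"B": 0.31}
```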

TTitscher commented 2 years ago

You have some interesting working hours :stuck_out_tongue:

Thanks for having a look! I found all your comments/suggestions useful and applied them. Sorry that the code formatting (of completely unrelated files) cluttered the diff; next time, I'll open a separate, pure code-formatting PR.