Closed jeverink closed 5 months ago
Hi @nabriis , I've added an explanation as to why I have designed the RegularizedUniform class as such above, hope it helps.
PS, should this PR remain a draft or be changed?
Thanks for the comment @jakobsj , I have now updated all the documentation and added some tests.
I have now renamed RegularizedUniform to RegularizedUnboundedUniform
Thanks @jeverink. I've rerequested review from Nicolai.
Closes #436
Sub PR from #424
Explanation of RegularizedUniform: Consider a regularized Gaussian posterior of the form: $\min_{x} \text{likelihood}(x, \hat{b}) + \frac12 \|x - \hat{\mu}\|_{\Sigma^{-1}}^2 + f(x)$. In various cases, we might not want the randomized linear least squares term from the RegularizedGaussian prior, but only the penalty $f(x)$. To make the randomized prior term disappear, we let $\Sigma^{-1} = 0$, or more precisely, $\Sigma^{-\frac{1}{2}} = 0$, such that the posterior takes the form:
$\min_{x} \text{likelihood}(x, \hat{b}) + f(x)$.
Thus, the RegularizedUniform can be interpreted as a RegularizedGaussian with an infinitely flat underlying Gaussian, i.e., a uniform "distribution" on the unbounded parameter space.
By letting the RegularizedUniform inherit from RegularizedGaussian and setting the sqrtprec to zero, RegularizedLinearRTO will accept this implicit prior without modification.
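To illustrate the idea, here is a minimal sketch (not the actual CUQIpy API; the class and method names are simplified stand-ins) of how setting sqrtprec to zero makes the Gaussian term vanish from the negative log-density, leaving only the penalty $f(x)$:

```python
import numpy as np

# Hypothetical sketch, not the real CUQIpy classes: a regularized Gaussian
# with negative log-density 0.5 * ||sqrtprec @ (x - mu)||^2 + f(x).
class RegularizedGaussianSketch:
    def __init__(self, mu, sqrtprec, penalty):
        self.mu = np.asarray(mu, dtype=float)
        self.sqrtprec = np.asarray(sqrtprec, dtype=float)
        self.penalty = penalty  # the term f(x)

    def neg_logpdf(self, x):
        # Gaussian part plus penalty part.
        r = self.sqrtprec @ (np.asarray(x, dtype=float) - self.mu)
        return 0.5 * r @ r + self.penalty(x)

# With sqrtprec = 0 the Gaussian term is identically zero, so the
# "uniform" variant only contributes the penalty f(x), exactly as in
# the posterior min_x likelihood(x, b) + f(x).
class RegularizedUnboundedUniformSketch(RegularizedGaussianSketch):
    def __init__(self, dim, penalty):
        super().__init__(np.zeros(dim), np.zeros((dim, dim)), penalty)

# Example: with an l1 penalty, neg_logpdf reduces to the l1 norm alone.
l1 = lambda x: np.sum(np.abs(x))
prior = RegularizedUnboundedUniformSketch(3, l1)
print(prior.neg_logpdf([1.0, -2.0, 3.0]))  # equals l1 norm = 6.0
```

Because the uniform variant is a genuine subclass with a zero sqrtprec rather than a separate code path, any sampler written against the Gaussian interface (such as RegularizedLinearRTO) handles it unchanged.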
To-do: