Closed: jeverink closed this 6 days ago
Thanks for the feedback @nabriis ,
The conditioning of the regularized Gaussian distributions is still a bit of a mystery to me, but it does all seem to work. I hope you understand it well enough to know how to improve it properly.
Thanks for the feedback @amal-ghamdi . Regarding the possible unification of the penalty terms: I think that is a great idea for streamlining the features, and I have opened issue #584 for it.
Also, to @nabriis : I have now added a general issue #585 with a rough roadmap of the upcoming/planned features that will hopefully follow from this PR.
@jeverink 🎉
Adds Total Variation (TV) regularization as an option to the regularized Gaussian distribution.
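For context (this sketch is not part of the PR itself), anisotropic total variation penalizes the l1 norm of the finite differences of the signal, scaled by the strength parameter; a minimal NumPy illustration of the 1D penalty:

```python
import numpy as np

def tv_penalty(x, strength=1.0):
    """Anisotropic 1D total variation: strength * sum_i |x[i+1] - x[i]|."""
    return strength * np.sum(np.abs(np.diff(x)))

# A piecewise-constant ("square") signal has low TV: one jump up, one jump down.
square = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
print(tv_penalty(square, strength=10))  # 10 * (1 + 1) = 20.0
```

This is why TV is a good fit for the `'square'` and `"cookie"` phantoms in the examples below: it favors piecewise-constant reconstructions with few jumps.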
PR features:
Regarding tests: I propose adding a small regression test of a full hierarchical model with a TV-regularized prior as part of the next PR, since many smaller details may still change rapidly in upcoming smaller PRs on the regularized Gaussian distribution.
Example results:
```python
import numpy as np
from cuqi.testproblem import Deconvolution1D
from cuqi.distribution import Gaussian, JointDistribution
from cuqi.implicitprior import RegularizedGaussian
from cuqi.sampler import RegularizedLinearRTO

n = 128
A, y_data, info = Deconvolution1D(dim=n, phantom='square').get_components()

# TV-regularized Gaussian prior with regularization strength 10
x = RegularizedGaussian(0.5*np.ones(n), 0.1, regularization="tv", strength=10)
y = Gaussian(A@x, 0.001)

joint = JointDistribution(x, y)
posterior = joint(y=y_data)

sampler = RegularizedLinearRTO(posterior, maxit=50, penalty_parameter=10)
sampler.sample(100, 10)
samples = sampler.get_samples()
```
```python
import numpy as np
from cuqi.testproblem import Deconvolution2D
from cuqi.distribution import Gaussian, JointDistribution
from cuqi.implicitprior import RegularizedGMRF
from cuqi.sampler import RegularizedLinearRTO

M = 100
A, y_data, info = Deconvolution2D(dim=M, phantom="cookie").get_components()

# TV-regularized GMRF prior; precision and strength depend on hyperparameter d
x = RegularizedGMRF(0.0*np.ones(M**2), prec=lambda d: d,
                    regularization="tv", strength=lambda d: d,
                    geometry=A.domain_geometry)
x = x(d=10)  # condition the prior on d
y = Gaussian(A@x, 0.001)

joint = JointDistribution(x, y)
posterior = joint(y=y_data)

sampler = RegularizedLinearRTO(posterior, maxit=25, penalty_parameter=10)
sampler.sample(100, 10)
samples = sampler.get_samples()
```