Closed shnaqvi closed 1 year ago
The ADMM penalty parameter is too small. Try setting it significantly larger; rho = 1e0 would be a good place to start, but you'll need to experiment to find the value that gives the best convergence.
Thanks @bwohlberg, I tried various values of rho (1, 10, 1000), and at 1000 I get reasonable image recovery. However, the solver doesn't seem to apply the TV regularizer norm1(gradient()), no matter what value of lambda I try (1, 10, 100). I was expecting higher contrast in the recovered image, with sharp edges and blocky content. It seems to me that I haven't set up the solver properly to do the TV regularization. It takes norm1 in g_list and FiniteDifference in C_list, but does it actually compose them inside the functional?
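For intuition, here is a minimal NumPy sketch (not the library's actual implementation) of what that composition evaluates: anisotropic TV is the L1 norm applied to the output of a finite-difference operator, i.e. norm1(gradient(x)):

```python
import numpy as np

def finite_difference(x):
    """First differences of a 2D image along each axis."""
    dv = np.diff(x, axis=0)  # vertical differences
    dh = np.diff(x, axis=1)  # horizontal differences
    return dv, dh

def tv(x):
    """Anisotropic total variation: L1 norm of the finite differences."""
    dv, dh = finite_difference(x)
    return np.abs(dv).sum() + np.abs(dh).sum()
```

A piecewise-constant image has small TV (zero for a constant image), which is why this regularizer favors the blocky, sharp-edged reconstructions you were expecting.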
Do you know how to get the desired deblur results?
The result looks reasonable to me. Try lbd = 5e-1, rho = 5e0, and maxiter = 50.
Wow, this worked beautifully @bwohlberg. Would you help me understand the role rho is playing here? I'm curious what symptoms you were seeing that motivated this combination of values. Can I change rho and lambda to help it converge more quickly?
lambda determines the strength of the regularization in the functional you want to minimize, and rho plays a major role in the convergence of the ADMM algorithm that minimizes that functional. So in principle, at least, you should first choose lambda for the best results and then choose rho for the best convergence. Convergence will be very slow if rho is either too small or too large, but it's often difficult to know in advance what the right choice is. For problems like this, the best choice of rho is typically somewhere between 10 and 100 times lambda.
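To illustrate the effect of rho on convergence speed, here is a toy scalar ADMM (a hypothetical example, not the image problem in this thread) for min_x 0.5*(x - b)**2 + lam*|x|, whose exact minimizer is the soft-thresholding of b by lam:

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of t * |.|"""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def admm_scalar(b, lam, rho, maxiter=1000, tol=1e-8):
    """ADMM for min_x 0.5*(x - b)**2 + lam*|x| with the split x = z."""
    x = z = u = 0.0
    for k in range(maxiter):
        z_old = z
        x = (b + rho * (z - u)) / (1.0 + rho)  # quadratic (data) subproblem
        z = soft(x + u, lam / rho)             # prox of the L1 term
        u += x - z                             # scaled dual update
        # stop when both primal residual (x - z) and dual change are small
        if abs(x - z) < tol and abs(z - z_old) < tol:
            break
    return z, k + 1
```

For b = 3 and lam = 1 the minimizer is 2, and comparing iteration counts for rho in {0.01, 1, 100} shows the behavior described above: both very small and very large rho take far more iterations to converge than an intermediate value.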
I've set up a synthetic image and blurred it with an anisotropic Gaussian kernel. I started off using simple ADMM, following the example here, to solve the inverse problem. However, the solver rapidly diverges to inf within a few iterations. Can you please check my code below and see if there is anything obviously wrong with the setup of the problem? Also, how do we nest two operators, say FiniteDifference and L21Norm, to get the TV loss? P.S. I'm on an M1 Mac with Python 3.10.1.