Closed: adler-j closed this issue 7 years ago.
I don't know if I'm doing something wrong, but when I tried the above code I get the following error:
>>> prob.solve(verbose=True, max_iters=2000)
Estimated params [sigma = 1.000 | tau = 1.000 | theta = 1.000 | L_est = 1.0000]
Prox NLM params are: [sigma =nan prior=0 sigma_scale=6.0]
OpenCV Error: Assertion failed (almost_dist2weight_[0] == fixed_point_mult_) in FastNlMeansDenoisingInvoker, file -------src-dir-------/opencv-2.4.10/modules/photo/src/fast_nlmeans_denoising_invoker.hpp, line 150
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/github/ProxImaL/proximal/algorithms/problem.py", line 136, in solve
    *args, **kwargs)
  File "/github/ProxImaL/proximal/algorithms/pock_chambolle.py", line 128, in solve
    lin_solver=lin_solver, options=lin_solver_options).flatten()
  File "/github/ProxImaL/proximal/prox_fns/prox_fn.py", line 84, in prox
    xhat = self._prox(rho_hat, v, *args, **kwargs)
  File "/github/ProxImaL/proximal/prox_fns/patch_NLM.py", line 96, in _prox
    self.searchWindowSizeNLM)
cv2.error: -------src-dir-------/opencv-2.4.10/modules/photo/src/fast_nlmeans_denoising_invoker.hpp:150: error: (-215) almost_dist2weight_[0] == fixed_point_mult_ in function FastNlMeansDenoisingInvoker
Before running, I pulled the latest versions of both ODL and ProxImaL, so I have the latest commit on the ODL branch (commit 14ccca7) and the latest commit in ProxImaL (commit a9bec9e).
I'm running Ubuntu 14.04, Python 2.7.12, and OpenCV 2.4.10.
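For what it's worth, the `Prox NLM params are: [sigma =nan ...]` line suggests the iterate already contained NaNs before it reached OpenCV, so the cv2 assertion is probably a downstream symptom of divergence. A hypothetical guard (not part of ProxImaL; `check_finite` is an illustrative helper) would surface this earlier with a clearer message:

```python
import numpy as np

def check_finite(v, name="prox input"):
    # Fail fast with a clear message instead of letting NaNs propagate
    # into cv2.fastNlMeansDenoising and trip its internal assertion.
    if not np.all(np.isfinite(v)):
        raise ValueError(name + " contains NaN/Inf; the solver likely diverged")
    return v

# A diverged iterate would be rejected here:
diverged = np.full((4, 4), np.nan)
try:
    check_finite(diverged)
except ValueError as err:
    print("caught:", err)
```

Such a check placed at the top of the prox call would turn the opaque OpenCV assertion into an actionable error.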
This will sound very strange, but it involves the use of norm_bound and show ... I went back to the Laplace example in ODL. There you basically have the following example:
import numpy as np
import odl
import proximal
# Create space, a square from [0, 0] to [100, 100] with (100 x 100) points
space = odl.uniform_discr([0, 0], [100, 100], [100, 100])
# Create odl operator for laplacian
laplacian = odl.Laplacian(space)
# Create right hand side
rhs = laplacian(odl.phantom.shepp_logan(space, modified=True))
rhs += odl.phantom.white_noise(space) * np.std(rhs) * 0.0 # disable noise
rhs.show('noisy data')
# Convert laplacian to cvx operator
norm_bound = odl.power_method_opnorm(laplacian, 10, rhs)
cvx_laplacian = odl.operator.oputils.as_cvx_operator(laplacian,
                                                     norm_bound=norm_bound)
# Convert to array
rhs_arr = rhs.asarray()
# Set up optimization problem
x = proximal.Variable(space.shape)
funcs = [10 * proximal.sum_squares(cvx_laplacian(x) - rhs_arr),
         proximal.norm1(proximal.grad(x))]
prob = proximal.Problem(funcs)
# Solve the problem
prob.solve(verbose=True, solver='pc', max_iters=1000)
# Convert back to odl and display result
result_odl = space.element(x.value)
result_odl.show('result')
When I run this exact code, ProxImaL gives output of the form:
Estimated params [sigma = 1.000 | tau = 1.000 | theta = 1.000 | L_est = 1.0000]
iter 1: ||r||_2 = nan, eps_pri = nan, ||s||_2 = nan, eps_dual = nan
iter 2: ||r||_2 = nan, eps_pri = nan, ||s||_2 = nan, eps_dual = nan
iter 3: ||r||_2 = nan, eps_pri = nan, ||s||_2 = nan, eps_dual = nan
iter 4: ||r||_2 = nan, eps_pri = nan, ||s||_2 = nan, eps_dual = nan
iter 5: ||r||_2 = nan, eps_pri = nan, ||s||_2 = nan, eps_dual = nan
iter 6: ||r||_2 = nan, eps_pri = nan, ||s||_2 = nan, eps_dual = nan
and after 1000 iterations the variable is full of nan. However, if I remove the line rhs.show('noisy data'), the output looks like
Estimated params [sigma = 1.000 | tau = 1.000 | theta = 1.000 | L_est = 1.0000]
iter 1: ||r||_2 = 10.380, eps_pri = 0.188, ||s||_2 = 0.030, eps_dual = 0.105
iter 2: ||r||_2 = 8.084, eps_pri = 0.188, ||s||_2 = 0.014, eps_dual = 0.105
iter 3: ||r||_2 = 7.428, eps_pri = 0.188, ||s||_2 = 0.010, eps_dual = 0.105
iter 4: ||r||_2 = 7.460, eps_pri = 0.188, ||s||_2 = 0.009, eps_dual = 0.104
iter 5: ||r||_2 = 7.519, eps_pri = 0.189, ||s||_2 = 0.009, eps_dual = 0.103
iter 6: ||r||_2 = 7.450, eps_pri = 0.189, ||s||_2 = 0.009, eps_dual = 0.103
and after 1000 iterations it terminates and gives a perfectly fine reconstruction. Now if I put the line rhs.show('noisy data') back, but remove the norm estimation, i.e., change the creation of the proximal_lang operator to cvx_laplacian = odl.operator.oputils.as_cvx_operator(laplacian), I get the output
Estimated params [sigma = 1.000 | tau = 1.000 | theta = 1.000 | L_est = 1.0000]
iter 1: ||r||_2 = 10.397, eps_pri = 0.188, ||s||_2 = 0.025, eps_dual = 0.105
iter 2: ||r||_2 = 8.072, eps_pri = 0.188, ||s||_2 = 0.013, eps_dual = 0.105
iter 3: ||r||_2 = 7.353, eps_pri = 0.188, ||s||_2 = 0.010, eps_dual = 0.105
iter 4: ||r||_2 = 7.365, eps_pri = 0.188, ||s||_2 = 0.009, eps_dual = 0.104
iter 5: ||r||_2 = 7.445, eps_pri = 0.189, ||s||_2 = 0.009, eps_dual = 0.103
iter 6: ||r||_2 = 7.397, eps_pri = 0.189, ||s||_2 = 0.009, eps_dual = 0.103
and again after 1000 iterations it gives a perfectly fine reconstruction.
Just to make sure that I'm not doing anything obviously wrong: I run it in Python consoles in Spyder, version 2.3.9, and each time after running a file I closed the corresponding console and opened a new one (so that nothing lying around in memory could affect the outcome).
This might be a Spyder-related issue. When I run the code in the standard Konsole, everything runs nicely. @adler-j, are you running the original issue code in Spyder? Try removing the show statement.
Removing the show statement, or running it in the Konsole, I don't get the crash that I first reported... I seem to have an issue with spyder and graphics.
Not able to test this at the moment, but I am not getting NaN as an answer, only a typically divergent sequence. So I don't think that is the issue.
Regarding getting NaN when using show: this is possibly because calling show causes matplotlib to be imported in ODL. You could try changing rhs.show() to import matplotlib.pyplot as plt and see if the issue persists.
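To isolate the suspected trigger, one could do the import explicitly and skip the plot entirely; if the NaNs still appear, the import itself (rather than the figure window) is the culprit. A minimal sketch, meant to replace the rhs.show('noisy data') line; forcing the non-interactive Agg backend is an extra assumption here, added to rule out GUI/event-loop state:

```python
# Replace
#     rhs.show('noisy data')
# with an explicit import and no plotting at all:
import matplotlib
matplotlib.use('Agg')  # assumption: a non-GUI backend rules out window/backend state
import matplotlib.pyplot as plt  # the same import show() would trigger indirectly

# ... then run the rest of the example unchanged and check whether the
# solver still produces NaN residuals.
```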
Edit: Regarding the OpenCV error, I'm not getting it, and since I'm behind the great firewall of China I can't help you debug it. It does, however, look like a separate issue.
The whole matplotlib-in-Spyder issue is pretty bizarre. I wanted to add, though, that patch_NLM is nonconvex, so for practical applications you should specify an initial point x0 via prob.solve(x0=...).
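Since patch_NLM is nonconvex, the solver can land in different stationary points depending on where it starts. A sketch of building a warm start from the data instead of the default zero start (pure NumPy; rhs_arr here is a random stand-in for the data array from the Laplace example above, and the commented solve call is the x0 usage mentioned):

```python
import numpy as np

# Stand-in for rhs_arr from the Laplace example above
rhs_arr = np.random.rand(100, 100)

# Simple warm start: a 5x5 box blur of the data, which is usually much
# closer to a reasonable reconstruction than the default x0 = 0
pad = 2
padded = np.pad(rhs_arr, pad, mode='edge')
x0 = np.zeros_like(rhs_arr)
for i in range(x0.shape[0]):
    for j in range(x0.shape[1]):
        x0[i, j] = padded[i:i + 5, j:j + 5].mean()

# Passed to the solver in place of the default zero initial point:
# prob.solve(x0=x0, verbose=True, solver='pc', max_iters=1000)
```

Any rough estimate of the solution works as x0; the point is only to start the nonconvex iteration near a sensible basin.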
The whole matplotlib in spyder issue is pretty bizarre.
Certainly agree. Interestingly, we found a related yet very similar bug in the astra-toolbox package (https://github.com/astra-toolbox/astra-toolbox/issues/16) earlier this year. That time, creating a plot changed the behaviour of the rest of the code. May be related?
I wanted to add though that patch_NLM is nonconvex, so for practical applications you should specify an initial point x0 via prob.solve(x0=...)
What is the initial guess otherwise?
The default initial guess is x0 = 0.
I've been trying to use patch_NLM with a nontrivial data discrepancy term, but my results seem to always diverge. The call is basically, using this ODL branch:
This gives utter nonsense as a result, and it does not seem to converge when max_iters is increased, nor does overestimating the norm_bound help. Changing the scaling term 0.01 in front of patch_NLM seems to have only minor impact.
It seems I am missing something, so adding an example of using this regularizer would be great!