comp-imaging / ProxImaL

A domain-specific language for image optimization.
MIT License

Unable to extract background field "w"; modified Horn-Schunck optical flow with norm1 #52

Closed: jonpry closed this issue 1 year ago

jonpry commented 3 years ago

Hi,

Using ProxImaL with an ODL operator, I was able to implement robust Horn-Schunck optical flow in just a few lines of code. Amazing!

laplacian = odl.Laplacian(space)

# Convert laplacian to ProxImaL operator
proximal_lang_laplacian = odl.as_proximal_lang_operator(laplacian)

fx = np.gradient(image1)[1]
fy = np.gradient(image1)[0]
ft = image1 - image2

# Set up optimization problem
u = proximal.Variable(space.shape)
v = proximal.Variable(space.shape)
alpha = .07
funcs = [alpha * proximal.norm1(proximal_lang_laplacian(u)),
         alpha * proximal.norm1(proximal_lang_laplacian(v)),
         proximal.norm1(proximal.mul_elemwise(fx,u) + proximal.mul_elemwise(fy,v) - ft)]
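
To actually run this, the funcs list goes into a Problem object and gets solved; a minimal sketch, with 'pc' as one possible solver choice:

# Build and solve the problem from the funcs list above.
prob = proximal.Problem(funcs)
prob.solve(verbose=True, solver='pc')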

This works great. However, I am trying to solve a slightly different problem:

laplacian = odl.Laplacian(space)

# Convert laplacian to ProxImaL operator
proximal_lang_laplacian = odl.as_proximal_lang_operator(laplacian)

fx = np.gradient(image1)[1]
fy = np.gradient(image1)[0]
ft = image1 - image2

# Set up optimization problem
u = proximal.Variable(space.shape)
v = proximal.Variable(space.shape)
w = proximal.Variable(space.shape)
alpha = .07
funcs = [alpha * proximal.norm1(proximal_lang_laplacian(u)),
         alpha * proximal.norm1(proximal_lang_laplacian(v)),
         alpha * proximal.norm1(proximal_lang_laplacian(w)),
         proximal.norm1(proximal.mul_elemwise(fx,u) + proximal.mul_elemwise(fy,v) + w - ft)]

In short, I have added the free variable w. After doing this, the results don't make sense: the regularization parameter alpha makes no difference; values from 0 to 1e40 all give the same result. And without regularization, the problem is ill-posed. My first thought was that the solver is unhappy that u and v are scaled while w is not, but wrapping w in proximal.mul_elemwise(np.ones_like(fy), w) does not improve things. Any ideas?
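
For concreteness, the kind of explicit weighting I have in mind looks like this; the weight beta is an arbitrary heuristic, not something I have verified, and only operators already used above appear here:

# Sketch: weight w explicitly so its magnitude is comparable to the fx*u and fy*v terms.
# beta is a made-up heuristic, not a verified fix.
beta = np.maximum(np.abs(fx), np.abs(fy)).mean()
funcs = [alpha * proximal.norm1(proximal_lang_laplacian(u)),
         alpha * proximal.norm1(proximal_lang_laplacian(v)),
         alpha * proximal.norm1(proximal_lang_laplacian(w)),
         proximal.norm1(proximal.mul_elemwise(fx, u)
                        + proximal.mul_elemwise(fy, v)
                        + proximal.mul_elemwise(beta * np.ones_like(fy), w)
                        - ft)]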

SteveDiamond commented 3 years ago

I'm glad the optical flow worked well for you! I'm not sure what's causing your bug. Have you tried all the different solver algorithms (http://www.proximal-lang.org/en/latest/tutorial/index.html#solver-algorithms)?

Otherwise to debug we'd need a script we can run.
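
For example, something along these lines would cycle through two of them; 'admm' is the default algorithm and 'pc' selects Pock-Chambolle, while the docs page above lists the remaining algorithms (a sketch only, reusing the names from your snippet):

# Sketch: solve the same problem with each algorithm and compare the results.
# Only the 'pc' and 'admm' keys are used here; see the docs page above for the rest.
for solver in ['pc', 'admm']:
    prob = proximal.Problem(funcs)
    prob.solve(solver=solver, verbose=True)
    print(solver, float(np.abs(w.value).max()))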

jonpry commented 3 years ago

Thank you for your quick response! Hopefully this example will be informative. I know that the proper solution is that u and v are zero everywhere, and w is .08 inside the circle and 0 outside. Of course there will be distortions around the boundary. I have implemented an iterative solver for this problem, at least the l2-norm version of it, and it gets the right solution, but I can't seem to get it with ProxImaL for some reason.


import numpy as np
import odl
import proximal
import sys
import matplotlib.pyplot as plt

# Draw a radial ramp inside a circle on an otherwise empty image
X,Y = np.mgrid[-50:50:100j, -50:50:100j]
image1 = np.zeros((100,100))
dp = np.sqrt(X**2 + Y**2)
image1[ X**2 + Y**2 < 35**2] = dp[X**2 + Y**2 < 35**2]

# The second image is offset by .08 inside the circle
image2 = np.zeros((100,100))
image2[ X**2 + Y**2 < 35**2] = dp[X**2 + Y**2 < 35**2] + .08

#actual problem definition
space = odl.uniform_discr([0, 0], image1.shape, image1.shape)
laplacian = odl.Laplacian(space)
proximal_lang_laplacian = odl.as_proximal_lang_operator(laplacian)
fx = np.gradient(image1)[1]
fy = np.gradient(image1)[0]
ft = image1 - image2

# Set up optimization problem
u = proximal.Variable(space.shape)
v = proximal.Variable(space.shape)
w = proximal.Variable(space.shape)

alpha = .0001
funcs = [alpha * proximal.norm1(proximal_lang_laplacian(u)),
         alpha * proximal.norm1(proximal_lang_laplacian(v)),
         alpha * proximal.norm1(proximal_lang_laplacian(w)),
         proximal.norm1(proximal.mul_elemwise(fx,u) + proximal.mul_elemwise(fy,v) + w - ft)]

# Solve the problem using ProxImaL
prob = proximal.Problem(funcs)
prob.solve(verbose=True,solver='pc')

fig,ax = plt.subplots(1)    
ax.imshow(image1)

fig,ax = plt.subplots(1)    
ax.imshow(u.value)

fig,ax = plt.subplots(1)    
ax.imshow(v.value)

fig,ax = plt.subplots(1)    
ax.imshow(w.value)

plt.show()
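
A possible sanity check, not part of the script above: with u = v = 0, the data term is minimized when w matches ft, so the two can be compared directly.

# Sanity check (illustration only): inside the circle w should track ft,
# outside it should stay near zero, up to distortion at the boundary.
mask = X**2 + Y**2 < 35**2
print("max |w - ft| inside the circle:", np.abs(w.value - ft)[mask].max())
print("max |w| outside the circle:", np.abs(w.value)[~mask].max())
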
antonysigma commented 3 years ago

Hi @jonpry,

I was a previous contributor to this project; I primarily reworked the Halide subsystem.

I believe what you observed is likely the same issue I encountered at https://github.com/comp-imaging/ProxImaL/blob/master/proximal/tests/test_algs.py#L37-L40. I convinced @SteveDiamond to skip those test cases; the failure to reach the global minimum didn't seem to bother typical users, specifically those who plot the TV-denoised image with imshow(), which normalizes the colormap by default.

Could you please comment out the lines marked with @pytest.mark.skip(reason="algorithm failed to converge") and see whether those test cases pass on your machine?
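
For reference, a skipped case looks roughly like this (the test name below is a placeholder; the real ones are at the linked lines), and commenting out the marker line re-enables it:

import pytest

class TestAlgs:
    # Placeholder test name; the actual skipped cases are at the linked lines.
    @pytest.mark.skip(reason="algorithm failed to converge")
    def test_affected_case(self):
        ...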


The context:

Back when I polished the test code in PR https://github.com/comp-imaging/ProxImaL/pull/49, I found that the ProxImaL framework failed to converge to the global minimum even on trivial test cases. Your issue seems to match the pattern of the test cases linked above.

I have two hypotheses:

  1. There is a bug in the ADMM implementation, or
  2. the ADMM algorithm intrinsically fails to converge when the signal-fidelity term is regularized with more than one proximal function.

Regards, Antony

SteveDiamond commented 3 years ago

Now I'm feeling a bit guilty. I can guarantee that the problem is 1, not 2. I'll take a look this weekend.

jonpry commented 3 years ago

The results of test_algs.py:

FAILED test_algs.py::TestAlgs::test_admm_two_prox_fn - assert 0.78355053884267778 < 1e-05
FAILED test_algs.py::TestAlgs::test_admm_extra_param - assert 0.034965101711908197 < 0.02
FAILED test_algs.py::TestAlgs::test_pock_chambolle_cuda - ValueError: unknown type object
FAILED test_algs.py::TestAlgs::test_pock_chambolle_numpy - assert 0.78355053886597159 < 1e-05
FAILED test_algs.py::TestAlgs::test_half_quadratic_splitting_single_prox_fn - assert 1.0 < 1e-05
FAILED test_algs.py::TestAlgs::test_half_quadratic_splitting_two_prox_fn - assert 0.78355060324288039 < 0.001
FAILED test_algs.py::TestAlgs::test_lin_admm_single_prox_fn - assert 1.0 < 1e-05
FAILED test_algs.py::TestAlgs::test_lin_admm_two_prox_fn - assert 0.78355053884287829 < 1e-05
FAILED test_algs.py::TestAlgs::test_equil - assert 1.0 < 0.001

SteveDiamond commented 3 years ago

I investigated these bugs with git bisect, and the last working commit was 2804c19e141b637a5fcbf92acdcae81cab4c6d1e. @jonpry could you try the tests on that commit?

SteveDiamond commented 3 years ago

@antonysigma sorry I didn't do more thorough due diligence on your much-appreciated update to this somewhat neglected project. We'll sort out what's going on here.

jonpry commented 3 years ago

@SteveDiamond

I ported the head test_algs.py and ran it on the 2804c19e141b637a5fcbf92acdcae81cab4c6d1e tree. Results are:

FAILED test_algs.py::TestAlgs::test_admm_two_prox_fn - AssertionError: assert 0.78355053884267778 < 1e-05
FAILED test_algs.py::TestAlgs::test_equil - AssertionError: assert 1.0 < 0.001
FAILED test_algs.py::TestAlgs::test_half_quadratic_splitting_single_prox_fn - AssertionError: assert 1.0 < 1e-05
FAILED test_algs.py::TestAlgs::test_half_quadratic_splitting_two_prox_fn - AssertionError: assert 0.78355060324288039 < 0.001
FAILED test_algs.py::TestAlgs::test_lin_admm_single_prox_fn - AssertionError: assert 1.0 < 1e-05
FAILED test_algs.py::TestAlgs::test_lin_admm_two_prox_fn - AssertionError: assert 0.78355053884287829 < 1e-05
FAILED test_algs.py::TestAlgs::test_pock_chambolle_cuda - AssertionError: assert 0.7835505388655819 < 1e-05
FAILED test_algs.py::TestAlgs::test_pock_chambolle_numpy - AssertionError: assert 0.78355053886597159 < 1e-05

Qualitatively, it is a little different. ADMM and PC are now giving similar results on my test script, and it feels 'closer', but something is still definitely wrong. It seems to me that for large values of alpha the Laplacian penalty should force the solution to be the same everywhere, but for any alpha > 1 the solution is unchanged and not particularly smooth.
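
The sweep I am describing is along these lines, reusing the setup from my script above (illustration only):

# Re-solve for several alpha values and look at the spread of w; if the
# Laplacian penalty were having an effect, large alpha should make w much smoother.
for alpha in [1e-4, 1e0, 1e4, 1e40]:
    funcs = [alpha * proximal.norm1(proximal_lang_laplacian(u)),
             alpha * proximal.norm1(proximal_lang_laplacian(v)),
             alpha * proximal.norm1(proximal_lang_laplacian(w)),
             proximal.norm1(proximal.mul_elemwise(fx, u)
                            + proximal.mul_elemwise(fy, v) + w - ft)]
    proximal.Problem(funcs).solve(solver='pc')
    print(alpha, np.ptp(w.value))  # peak-to-peak spread of the recovered w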

SteveDiamond commented 3 years ago

These results are very surprising to me, because all the test_algs tests pass on my machine at commit 2804c19e141b637a5fcbf92acdcae81cab4c6d1e. I'm not really sure where to go from here.

SteveDiamond commented 3 years ago

Are you sure you're really running against that commit?

jonpry commented 3 years ago

I checked it out twice, removed everything from dist-packages and installed it.

These are the test files I modified: tests.zip

I am running Python 3.9 on Linux amd64. Could it be a problem with cvxpy?

SteveDiamond commented 3 years ago

Thanks @jonpry. Could you say more about why you modified the tests? I ran the exact tests in that commit. I highly doubt the problem is with cvxpy, but I guess anything is possible in theory.

jonpry commented 3 years ago

@SteveDiamond The tests in 2804c19e141b637a5fcbf92acdcae81cab4c6d1e are different from the tests at head. It seems to me that if we are rolling back to see whether it fixes the failures at head, we should not run a different set of tests. Running the tests from 2804c19e141b637a5fcbf92acdcae81cab4c6d1e did not work out of the box for me: a newer cvxpy, perhaps, didn't like the Variable(dim1, dim2) syntax. After changing these to Variable((dim1, dim2)), the tests would run, and there were some failures, though not by much.
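
For example, the change I made is essentially this (old versus new cvxpy syntax, as I understand it):

import cvxpy as cvx

# Older cvxpy accepted the dimensions as separate arguments:
#   X = cvx.Variable(10, 5)
# Newer cvxpy expects a single shape tuple, which is what the tests at head use:
X = cvx.Variable((10, 5))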

So then I tried to run the tests from head, and it seems there are still significant convergence issues.

SteveDiamond commented 3 years ago

@jonpry I fear that the commits since 2804c19 have put everything in a somewhat broken state, so I wanted to check whether things still work if we go back to 2804c19. It sounds like they do? If so, I would suggest proceeding with 2804c19 as your version of ProxImaL, and we will take some time and get head sorted out.

jonpry commented 3 years ago

Even on commit 2804c19e141b637a5fcbf92acdcae81cab4c6d1e, this test in particular fails:

    def test_admm_two_prox_fn(self):
        """Test ADMM algorithm, two prox functions.
        """
        X = px.Variable((10, 5))
        B = np.reshape(np.arange(50), (10, 5)) * 1.

        prox_fns = [px.norm1(X), px.sum_squares(X, b=B)]
        sltn = admm.solve(prox_fns, [], 1.0, eps_rel=1e-5, eps_abs=1e-5)

        cvx_X = cvx.Variable((10, 5))
        cost = cvx.sum_squares(cvx_X - B) + cvx.norm(cvx_X, 1)
        prob = cvx.Problem(cvx.Minimize(cost))
        prob.solve()
        self.assertItemsAlmostEqual(X.value, cvx_X.value, eps=2e-1)
        self.assertAlmostEqual(sltn, prob.value)

        psi_fns, omega_fns = admm.partition(prox_fns)
        sltn = admm.solve(psi_fns, omega_fns, 1.0, eps_rel=1e-5, eps_abs=1e-5)
        self.assertItemsAlmostEqual(X.value, cvx_X.value, eps=2e-2)
        self.assertAlmostEqual(sltn, prob.value)

        prox_fns = [px.norm1(X)]
        quad_funcs = [px.sum_squares(X, b=B)]
        sltn = admm.solve(prox_fns, quad_funcs, 1.0, eps_rel=1e-5, eps_abs=1e-5)
        self.assertItemsAlmostEqual(X.value, cvx_X.value, eps=2e-2)
        self.assertAlmostEqual(sltn, prob.value)

with

FAILED test_algs.py::TestAlgs::test_admm_two_prox_fn - AssertionError: assert 0.78355053884267778 < 1e-05

This looks to me like ADMM is not converging. I will try to go back further and find a commit that works.

antonysigma commented 3 years ago

I highly doubt the problem is with cvxpy but I guess anything is possible in theory.

@SteveDiamond, sorry to add more "noise" to the conversation. Perhaps what is revealed in test_algs.py is a separate issue originating from https://github.com/comp-imaging/ProxImaL/issues/15#issuecomment-239227226? It may or may not be relevant to what @jonpry is describing here.

I can fork the test_algs.py-related discussion into a separate GitHub issue.


Observations: Snippet A below passes verification, but Snippet B does not:

# Problem: arg min (x - 1)^2 + | x |, s.t. x is scalar. 
X = px.Variable((1, 1))
B = np.ones((1, 1), dtype=np.float32, order='F')

prox_fns = [px.norm1(X), px.sum_squares(X, b=B)]
sltn = admm.solve(prox_fns, [], 1.0, eps_rel=1e-5, eps_abs=1e-5)

cvx_X = cvx.Variable((1, 1))
cost = cvx.sum_squares(cvx_X - B) + cvx.norm(cvx_X, 1)
prob = cvx.Problem(cvx.Minimize(cost))
prob.solve()
print('admm solution = {}\ncvx solution = {}'.format(X.value, cvx_X.value))
# Both should equal 0.5

# Problem: arg min ||x - 1||_2^2 + ||x||_1, where x is a real 9-vector (stored as a 3 x 3 array).
# Question: is the problem identical to \sum_{i = 1}^9 [ (x_i - 1)^2 + abs(x_i) ]?
X = px.Variable((3, 3))
B = np.ones((3, 3), dtype=np.float32, order='F')

prox_fns = [px.norm1(X), px.sum_squares(X, b=B)]
sltn = admm.solve(prox_fns, [], 1.0, eps_rel=1e-5, eps_abs=1e-5)

cvx_X = cvx.Variable((3, 3))
cost = cvx.sum_squares(cvx_X - B) + cvx.norm(cvx_X, 1)
prob = cvx.Problem(cvx.Minimize(cost))
prob.solve()
print('admm = {}\ncvx = {}'.format(X.value, cvx_X.value))
# ProxImaL reports the solution x_{i,j} = 0.5;
# however, the cvx solver reports x_{i,j} = 0.8333...
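
For reference, the expected 0.5 in Snippet A follows from first-order optimality of the scalar objective:

\[
\min_x\; (x - 1)^2 + |x|
\;\Longrightarrow\;
0 = 2(x^\star - 1) + 1 \quad (x^\star > 0)
\;\Longrightarrow\;
x^\star = \tfrac{1}{2}.
\]
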
antonysigma commented 1 year ago

I know that the proper solution to this is that u and v are zero everywhere, and w is .08 inside the circle, and 0 outside.

Hi @jonpry, are you still working on this optical flow issue with the background field w? I have fixed quite a number of bugs in the ADMM algorithm, so you may want to try again.

Did you say we expect w to be -0.08 inside the circle, and zero outside? This is what I found on my end, closely matching your description:

[image attachment]

antonysigma commented 1 year ago

An optical flow example script has been added to the examples/ folder. Closing due to inactivity.