benchopt / benchmark_tv_1d

TV Denoising in 1D

Chambolle-Pock on higher dual #12

Closed · svaiter closed this 2 years ago

svaiter commented 2 years ago

Use the alternative dualization discussed in PR https://github.com/benchopt/benchmark_tv_1d/pull/8#issuecomment-1122473250 and issue https://github.com/benchopt/benchmark_tv_1d/issues/10
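For context, a minimal sketch of a Chambolle-Pock (PDHG) iteration for the problem this benchmark targets, `min_u 0.5 * ||y - A u||^2 + lam * ||D u||_1` with `D` the 1D finite-difference operator. This is an illustrative sketch, not the benchmark's solver: the function name, the `ratio` parametrization, and the dense handling of `D` are assumptions.

```python
import numpy as np


def chambolle_pock_tv1d(A, y, lam, n_iter=500, ratio=10.0):
    # PDHG for min_u 0.5 * ||y - A u||^2 + lam * ||D u||_1,
    # with D the 1D finite-difference operator (dense here for clarity).
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)           # (n - 1, n), rows e_{i+1} - e_i
    op_norm = np.linalg.norm(D, 2)           # ||D||_2
    tau = 0.99 / (op_norm * np.sqrt(ratio))  # primal step
    sigma = np.sqrt(ratio) / op_norm         # dual step: tau * sigma * ||D||^2 < 1
    # prox of tau * 0.5 * ||y - A u||^2 is a linear solve; factor it once
    M = np.linalg.inv(np.eye(n) + tau * A.T @ A)
    Aty = A.T @ y
    u = np.zeros(n)
    u_bar = u.copy()
    v = np.zeros(n - 1)
    for _ in range(n_iter):
        # dual step: ascent then projection onto {||v||_inf <= lam}
        v = np.clip(v + sigma * (D @ u_bar), -lam, lam)
        # primal step: prox of the data-fit term
        u_next = M @ (u - tau * (D.T @ v) + tau * Aty)
        u_bar = 2 * u_next - u               # extrapolation (theta = 1)
        u = u_next
    return u
```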

agramfort commented 2 years ago

@svaiter I gave you write access to the repo so the CI will be triggered automatically 🙏

agramfort commented 2 years ago

@svaiter check the CI, including the flake8 issues. 🙏

svaiter commented 2 years ago

I have an issue with benchopt on my MacBook M1 (arm64), so I am not really able to test this properly (it is mostly a guessing game with a script where I monitor the values myself). I will test my code ASAP on an x86 arch; if someone can do it before then, that would be great.

EnLAI111 commented 2 years ago

> I have an issue with benchopt on my MacBook M1 (arm64), so I am not really able to test this properly (it is mostly a guessing game with a script where I monitor the values myself). I will test my code ASAP on an x86 arch; if someone can do it before then, that would be great.

Yes! It works!

[screenshot]

agramfort commented 2 years ago

But it seems super slow: the test timed out after 5h. Is it related to this solver?

EnLAI111 commented 2 years ago

@svaiter Maybe you have already noticed that the value of `F - F*` increases in the beginning. In fact, this is the case for all my algorithms that use the dual variable. Thomas thought it may be caused by an unsuitable initialization of the dual variable, since we set it to `np.zeros(y.shape[0])`. So, using the relationship between the primal and the dual solutions, `u = np.linalg.pinv(A.T @ A) @ (A.T @ y - D @ v)`, I have tried to initialize the dual as `v = pinv(D.T) @ (y - A.T @ A @ u)` with `u = c * np.ones(y.shape[0])` or `u = np.zeros(y.shape[0])`. However, `F - F*` still increases in the beginning. Do you have some ideas for the dual initialization?
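A minimal sketch of the initialization being described, following the relation quoted in the comment; the names (`A`, `D`, `y`, `c`) follow the comment rather than the benchmark code, and whether `D` or `D.T` appears depends on the operator convention:

```python
import numpy as np


def init_dual_from_primal_guess(A, D, y, c=0.0):
    # Pick a primal guess u (constant c; c = 0 recovers zeros), then
    # invert u = pinv(A.T @ A) @ (A.T @ y - D @ v) for v in the
    # least-squares sense, so that (u, v) satisfies the stated
    # primal-dual relationship at initialization.
    u = c * np.ones(y.shape[0])
    v = np.linalg.pinv(D) @ (A.T @ y - A.T @ A @ u)
    return u, v
```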

josephsalmon commented 2 years ago

Can you add the dual objective to double check monotonicity?

svaiter commented 2 years ago

> But it seems super slow: the test timed out after 5h. Is it related to this solver?

It is super slow for several reasons: `ratio = 1.0` is dumb I think (it should be ~10), and primal-dual is slow compared to a direct solver like prox-tv, but 5h is definitely weird...
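To make the `ratio` remark concrete: in PDHG the primal and dual step sizes are tied by the constraint `tau * sigma * ||D||^2 <= 1`, so a single ratio between them fixes both. A hedged sketch of one common parametrization (the solver's actual one may differ):

```python
import numpy as np


def pdhg_step_sizes(op_norm, ratio=10.0):
    # tau (primal) and sigma (dual) with sigma / tau = ratio and
    # tau * sigma * op_norm**2 = 1, the usual PDHG step-size constraint.
    tau = 1.0 / (op_norm * np.sqrt(ratio))
    sigma = np.sqrt(ratio) / op_norm
    return tau, sigma
```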

> @svaiter Maybe you have already noticed that the value of `F - F*` increases in the beginning. In fact, this is the case for all my algorithms that use the dual variable. Thomas thought it may be caused by an unsuitable initialization of the dual variable, since we set it to `np.zeros(y.shape[0])`. So, using the relationship between the primal and the dual solutions, `u = np.linalg.pinv(A.T @ A) @ (A.T @ y - D @ v)`, I have tried to initialize the dual as `v = pinv(D.T) @ (y - A.T @ A @ u)` with `u = c * np.ones(y.shape[0])` or `u = np.zeros(y.shape[0])`. However, `F - F*` still increases in the beginning. Do you have some ideas for the dual initialization?

Yes, there is something to understand here. I just checked some old CUDA code and papers by CP; I think there is no particular strategy for the dual variable initialization in the literature. But it is definitely something to optimize.

> Can you add the dual objective to double check monotonicity?

One has to be careful here: for PDHG the primal-dual gap is guaranteed to be monotone only along the averaged (ergodic) iterates, so in particular not necessarily for the first iterates.
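To make that point concrete, a hedged sketch of the duality gap for the pure denoising case (`A = I`); with PDHG the gap is guaranteed to decrease along running averages of the iterates, not necessarily along the iterates themselves:

```python
import numpy as np


def duality_gap(u, v, y, D, lam):
    # Primal: F(u) = 0.5 * ||y - u||^2 + lam * ||D u||_1.
    primal = 0.5 * np.sum((y - u) ** 2) + lam * np.sum(np.abs(D @ u))
    # Dual (for ||v||_inf <= lam): G(v) = <y, D.T v> - 0.5 * ||D.T v||^2.
    Dtv = D.T @ v
    dual = y @ Dtv - 0.5 * Dtv @ Dtv
    return primal - dual  # >= 0 by weak duality


# Evaluate the gap on the ergodic averages, e.g. the running means of
# u_k and v_k: that sequence should be monotone, the raw iterates may not.
```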

EnLAI111 commented 2 years ago

> But it seems super slow: the test timed out after 5h. Is it related to this solver?

> It is super slow for several reasons: `ratio = 1.0` is dumb I think (it should be ~10), and primal-dual is slow compared to a direct solver like prox-tv, but 5h is definitely weird...

I ran the latest version, but it seems that `ratio = 1.0` is the fastest.

[screenshot]