Closed carterbox closed 3 years ago
@carterbox, first we need to handle https://github.com/tomography/tike/issues/52, i.e. separate the grad and cost functions from the operators. Then it becomes clear that different line search implementations can also be separated. Instead of giving them the cost function, we can send the operator. We may have line_search(fwd, ...), line_search_sqr(fwd, ...), etc. At the beginning of such a line search function we precompute temporary variables, like p1 = fwd(psi), p2 = fwd(dpsi), etc., and then use them to find the direction.
@nikitinvv I don't understand your idea fully. The cost and gradient are paired, but the operator and gradient are not? So it doesn't make sense to provide an operator and gradient. Please provide a full example API of what you are proposing.
@carterbox ok, my point is that the line search should be defined in the same place as the cost and grad functions, because its optimal calculation depends on the cost function and operators. So there are two ways: either you have the line search as a method in each operator (like it is now with cost and grad), or you put all of the functions (grad, cost, line search) in opt.py, where they take operators as a parameter.
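A rough sketch of the second option (all names here are hypothetical, not the tike API): a line search that takes the forward operator, precomputes fwd(psi) and fwd(dpsi) once, and then backtracks using only cheap elementwise updates, since the operator is linear.

```python
import numpy as np

def line_search(fwd, cost, psi, dpsi, step=1.0, shrink=0.5, max_backtrack=16):
    """Hypothetical operator-aware backtracking search.

    Because fwd is linear, fwd(psi + step * dpsi) == p1 + step * p2,
    so each backtrack costs only an elementwise update, not a new
    application of the forward operator.
    """
    p1 = fwd(psi)   # precomputed once
    p2 = fwd(dpsi)  # precomputed once
    best = cost(p1)
    for _ in range(max_backtrack):
        if cost(p1 + step * p2) < best:
            return step
        step *= shrink
    return 0.0  # no improving step found

# Toy usage: fwd is any linear operator, cost a distance to some data g.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
g = np.array([2.0, 1.0])
fwd = lambda x: A @ x
cost = lambda p: np.sum((p - g)**2)
step = line_search(fwd, cost, psi=np.zeros(2), dpsi=np.ones(2))
```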
The problem with moving the ptychography cost and grad functions to tike.opt is that the ptychography forward operator is not the forward model. Unless you can think of a graceful way to encapsulate these irregularities into the opt interface (?), we should add a linesearch method to the Ptycho operator (if you think it is going to provide speedups of more than 10x).
I guess this is the kind of irregularity that explains why both scipy.optimize and ODL use the cost/grad interface (with optional line search).
@carterbox ok, that sounds reasonable. In my opinion, for consistency with the definitions of the cost and grad functions, we should have the line search functions also be part of the operators. Then the cg solver will take cost, grad, and line search functions as parameters. Up to you.
@carterbox Have you observed frequent changes in the gamma step sizes during iterations? Do they stabilize after a certain number of iterations, or do we get something like 0.5, 0.25, 0.5, 1e-7, 0.5? If they do not change much, then we can try to avoid the line search.
Then cg solver will take cost, grad, and line search functions as parameters
Yes, this is the approach that seems to make the most sense. I'm just afraid of complexity. 😨
Have you observed very often changes in gamma step sizes during iterations?
I stopped monitoring this parameter after I thought the gradients were correct. Should this parameter go to the INFO logger or the DEBUG logger?
For @nikitinvv 's derivation, the linear term of the quadratic is:
sum_j ( G(x_j).real * G(d_j).real + G(x_j).imag * G(d_j).imag )
But for mine, the linear term is:
2 * sum_j( G(x_j).real * G(d_j).real + G(x_j).imag * G(d_j).imag )
I think they are not equivalent and mine is correct. Full derivation here.
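For what it's worth, a quick numpy check (mine, not from the thread) agrees with the factor of 2: for any linear G, the cost f(c) = sum_j |G(x_j) + c * G(d_j)|^2 is an exact quadratic in c, so finite differences recover its linear term exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
G = lambda v: A @ v                      # any linear operator will do
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
d = rng.standard_normal(4) + 1j * rng.standard_normal(4)

f = lambda c: np.sum(np.abs(G(x) + c * G(d))**2)

# f is an exact quadratic in c, so this central difference is exact
linear_term = (f(1.0) - f(-1.0)) / 2
Gx, Gd = G(x), G(d)
formula = 2 * np.sum(Gx.real * Gd.real + Gx.imag * Gd.imag)
assert np.isclose(linear_term, formula)
```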
(tike) bash-4.2$ python tike-recon.py catalyst combined 1 --folder output/line
scan range is (0, 651.836181640625), (0, 1135.3802490234375).
scan positions are (1, 1848, 2), float32
data is (1, 1848, 128, 128), float32
probe is (1, 1, 1, 7, 128, 128), complex64
2020-05-21 20:51:00 INFO combined for 1,848 - 128 by 128 frames for 10 iterations.
2020-05-21 20:51:00 INFO object and probe rescaled by 41.459629
2020-05-21 20:51:01 DEBUG step 0; length = 1.526e-05; cost = 1.869670e+10
2020-05-21 20:51:01 DEBUG step 1; length = 1.526e-05; cost = 1.713484e+10
2020-05-21 20:51:02 DEBUG step 2; length = 3.052e-05; cost = 1.697497e+10
2020-05-21 20:51:03 DEBUG step 3; length = 3.052e-05; cost = 1.697411e+10
2020-05-21 20:51:03 INFO object cost is +1.69741e+10
2020-05-21 20:51:03 DEBUG step 0; length = 1.000e+00; cost = 5.643800e+09
2020-05-21 20:51:03 DEBUG step 1; length = 1.000e+00; cost = 3.995764e+09
2020-05-21 20:51:03 DEBUG step 2; length = 1.000e+00; cost = 3.782804e+09
2020-05-21 20:51:03 DEBUG step 3; length = 1.000e+00; cost = 3.710062e+09
2020-05-21 20:51:03 INFO probe cost is +3.71006e+09
For the catalyst dataset, it seems like the initial step length for the probe is too small because it is always 1e0
and for the object, it is too large because it consistently shrinks to 1e-5
(approximately 16 backtracks). It may be worth our time to implement something that automatically adjusts the initial step size for the line search.
For typical optimization problems we wouldn't need this, but we are constantly switching between many sub-problems.
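One possible scheme (a sketch of mine, not what tike implements): carry the accepted step between calls and grow it whenever the first trial succeeds, so a sub-problem that keeps accepting its first step gets larger initial steps, and one that keeps backtracking starts from its last accepted step instead.

```python
def backtracking_search(f, x, d, step, shrink=0.5, grow=2.0, max_backtrack=32):
    """Hypothetical backtracking line search with an adaptive initial step.

    Returns (accepted_step, suggested_initial_step_for_next_call).
    If the very first trial already decreases f, the next initial step is
    grown; otherwise the accepted (shrunk) step is carried forward, so
    repeated long backtracks are avoided.
    """
    f0 = f(x)
    first_try = True
    for _ in range(max_backtrack):
        if f(x + step * d) < f0:
            return step, step * grow if first_try else step
        step *= shrink
        first_try = False
    return 0.0, step

# Toy usage on f(x) = x^2 moving from x = 1 toward the minimum:
f = lambda x: x * x
s, next_s = backtracking_search(f, 1.0, -1.0, 1e-3)   # tiny step accepted, grown
s2, next_s2 = backtracking_search(f, 1.0, -1.0, 100.0)  # huge step shrunk, carried
```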
For @nikitinvv 's derivation, the linear term of the quadratic is:
sum_j ( G(x_j).real * G(d_j).real + G(x_j).imag * G(d_j).imag )
But for mine, the linear term is:
2 * sum_j( G(x_j).real * G(d_j).real + G(x_j).imag * G(d_j).imag )
I think they are not equivalent and mine is correct. Full derivation here.
sure, I just forgot to write 2, from my code:
```python
p1 = data * 0
p2 = data * 0
p3 = data * 0
for m in range(probe.shape[1]):
    tmp1 = self.fwd(psi, scan, probe[:, m])
    tmp2 = self.fwd(psi, scan, dprb[:, m])
    p1 += cp.abs(tmp1)**2  # |fwd(psi, probe_m)|^2, summed over modes
    p2 += cp.abs(tmp2)**2  # |fwd(psi, dprb_m)|^2, summed over modes
    p3 += 2 * (tmp1.real * tmp2.real + tmp1.imag * tmp2.imag)  # cross term
```
To get reasonable gamma steps you should find a constant r such that r*R^H R f ~ f (of the same order), i.e. r ≈ &lt;R^H R f, f&gt; / &lt;R^H R f, R^H R f&gt;. Then include this r in the gradient step: gamma * r * R^H(R f - g) ~ f, i.e. here gamma should not be too low. Typically, this constant should only depend on the data sizes. I always find it experimentally (varying sizes n, ntheta, nscan, ... and observing the dependence). For example, in laminography I have something like 1/(ntheta*n*n). See 'Norm test' in https://github.com/nikitinvv/lamcg/blob/master/tests/test_adjoint.py. I remember I have also played with it in ptychography.
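A small numpy illustration of this scaling rule (my reading of the comment, with a random matrix standing in for R):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((256, 16))   # stand-in for the forward operator R
f = rng.standard_normal(16)          # stand-in for the object f

# r ~= <R^H R f, f> / <R^H R f, R^H R f>
RhRf = R.T @ (R @ f)
r = np.dot(RhRf, f) / np.dot(RhRf, RhRf)

# With this r, r * R^H R f is on the order of f, so a gradient update
# gamma * r * R^H(R f - g) is well scaled relative to f.
ratio = np.linalg.norm(r * RhRf) / np.linalg.norm(f)
assert 0.5 < ratio <= 1.0
```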
Waiting for #67 to be completed.
Closing this because I think tuning the step size so line searches are avoided or minimal is enough.
Purpose
Related to #60. In this PR, I'm trying to reduce the time spent on the backtracking line search in exchange for memory.
Approach
Our Operators are linear operators, e.g.

F(a + c * b) = F(a) + c * F(b)

where c is a scalar, so we can speed up the line search by caching F(x) and F(d), where d is the search direction and F() is the function being minimized.

General
This might work quite well if the ptychography cost functions easily split into linear and non-linear parts. The cost function for a single mode could be broken this way:
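As an illustration of the split (my own, with a generic linear F rather than the ptychography operator): the linear part of the cost at any trial step needs no new operator applications, and only the elementwise non-linear part must be recomputed.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
F = lambda v: A @ v              # stand-in for a linear forward operator

x = rng.standard_normal(8) + 1j * rng.standard_normal(8)  # current iterate
d = rng.standard_normal(8) + 1j * rng.standard_normal(8)  # search direction
Fx, Fd = F(x), F(d)              # cached once per line search

for c in (1.0, 0.5, 0.25):       # trial steps during backtracking
    # linear part: exact, no new call to F
    assert np.allclose(F(x + c * d), Fx + c * Fd)
    # non-linear part (e.g. |.|^2 in the intensity) is applied elementwise
    intensity = np.abs(Fx + c * Fd)**2
```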
However, when we have a probe with multiple incoherent modes, then life becomes difficult. :cry:
Ptycho-specific
We could write a specific line searcher for ptychography by refactoring the intensity sum as follows:
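A minimal sketch of this refactoring (my own illustration), using |a + c*b|^2 = |a|^2 + c^2*|b|^2 + 2*c*(a.real*b.real + a.imag*b.imag):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(100) + 1j * rng.standard_normal(100)  # cached fwd(x)
b = rng.standard_normal(100) + 1j * rng.standard_normal(100)  # cached fwd(d)

# cache three real arrays once per line search
p1 = np.abs(a)**2
p2 = np.abs(b)**2
p3 = 2 * (a.real * b.real + a.imag * b.imag)

def intensity(c):
    # changing the step size needs no call to the forward operator
    return p1 + c**2 * p2 + c * p3

for c in (1.0, 0.5, 0.25):
    assert np.allclose(intensity(c), np.abs(a + c * b)**2)
```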
where a = fwd(x), b = fwd(d), and c = step. This allows us to cache three arrays and change the step size without calling the forward operator every time. I guess this is fine because the code is modular.

Pre-Merge Checklists
Submitter
Use yapf to format python code.

Reviewer