davlars / ad-skull-reconstruction

Repository for reconstruction of simulated skull CT data for AD project

Error? with saveCont #3

Closed adler-j closed 7 years ago

adler-j commented 7 years ago

In several files we have stuff using saveCont that looks like

    niter = [int(i) for i in np.arange(5, 101, 5)]
    for iterations in niter:
        # Restarts the solver from the current x for another 5 iterations
        odl.solvers.conjugate_gradient_normal(A, x, rhs, niter=5,
                                              callback=callbackPrintIter)
        # Store to disk
        # Store to disk

I assume this is intended to save iterates 5, 10, 15, 20, etc., but since we restart the algorithm every 5th iteration, the so-called "10th" iterate is not what the 10th iterate would be if we let conjugate_gradient_normal run its course. This is likely even more pronounced for the more advanced solvers that "build momentum" towards the optimum.

My suggestion is to either port CallbackSaveToDisk, or wait until it is merged into master and use it from there. This would also make the code much easier to read (simply append a CallbackSaveToDisk to the callback if you want to save results).
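For reference, a minimal sketch of what such a callback could look like (the class name and saving scheme here are hypothetical, not ODL's actual CallbackSaveToDisk API). ODL callbacks are just callables that receive the current iterate, so saving every nth iterate only needs a counter:

```python
import numpy as np

class SaveEveryNth:
    """Hypothetical callback sketch: keep a copy of the iterate every nth call."""

    def __init__(self, n):
        self.n = n
        self.count = 0
        self.saved = {}  # iteration number -> snapshot of the iterate

    def __call__(self, x):
        self.count += 1
        if self.count % self.n == 0:
            # Copy, since the solver mutates x in place
            self.saved[self.count] = np.asarray(x).copy()

# Usage: run the solver once with the full niter and let the callback save.
# Here a plain loop stands in for the solver's internal iteration.
cb = SaveEveryNth(5)
for i in range(20):
    cb(np.full(3, float(i)))
print(sorted(cb.saved))  # iterates 5, 10, 15 and 20 were captured
```

With a real ODL callback subclass this could be appended to the existing callback via `&` composition, so the solver runs uninterrupted.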

davlars commented 7 years ago

This might be an odl-related question, but would we not save the 10th, 15th, 20th, etc. iteration if we simply use the previous finished iterate as the input x to our odl.solvers call? At least that's what the loop is doing now, but it might very well be incorrect for more advanced solvers. I'm not sure myself...

adler-j commented 7 years ago

Assume I make a solver that does this:

    def my_solver(f, x, niter):
        # Momentum-style gradient descent: the update direction is a
        # running average of past gradients, so it carries state
        # across iterations.
        direction = x.space.zero()
        for i in range(niter):
            direction = 0.5 * direction + 0.5 * f.gradient(x)
            x -= direction

As you see from this example, running it for 10 iterations gives a different result than running it for 5 iterations and then another 5, because the restart throws away the accumulated direction. In fact, with a constant unit gradient, 10 iterations move x a total distance of about 9, while two runs of 5 iterations move it only about 8.
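Those distances can be checked numerically with a scalar version of the sketch above, assuming a constant gradient of 1:

```python
def run(niter, restarts=1):
    """Momentum descent with constant gradient 1.0; returns total distance moved."""
    x = 0.0
    for _ in range(restarts):
        direction = 0.0  # each restart throws the accumulated momentum away
        for _ in range(niter):
            direction = 0.5 * direction + 0.5 * 1.0
            x -= direction
    return -x

print(run(10))              # one uninterrupted run of 10 iterations: about 9.0
print(run(5, restarts=2))   # two restarted runs of 5 iterations: about 8.06
```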

The above is basically the momentum method, and essentially all of our solvers (except plain gradient descent) perform tricks like this.

For the conjugate gradient solver this makes a notable difference: for an n×n system, CG is guaranteed (in exact arithmetic) to converge in at most n iterations, but if you keep restarting it, that guarantee no longer holds.
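A quick numerical illustration of this, using a textbook CG implementation (not ODL's) on a small SPD system: four iterations in one run recover the exact solution, while two restarted runs of two iterations do not.

```python
import numpy as np

def cg(A, b, x, niter):
    """Textbook conjugate gradient for SPD A, run for a fixed number of iterations."""
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(niter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p  # conjugacy relies on the history in p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)          # symmetric positive definite 4x4 system
b = rng.standard_normal(4)
x_exact = np.linalg.solve(A, b)

x_full = cg(A, b, np.zeros(4), niter=4)   # n iterations in one run
x_split = np.zeros(4)
for _ in range(2):
    x_split = cg(A, b, x_split, niter=2)  # restarted: 2 + 2 iterations

print(np.linalg.norm(x_full - x_exact))   # essentially zero (machine precision)
print(np.linalg.norm(x_split - x_exact))  # restarts lose conjugacy: larger error
```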

davlars commented 7 years ago

I totally agree, thanks a lot for clarifying. Like you suggested, CallbackSaveToDisk might be the way to go (once it's merged). In any case, I'm removing the incorrect for iterations in niter ... loop from this code repo.

adler-j commented 7 years ago

I suppose this is solved now, so I'll close this.