odlgroup / odl

Operator Discretization Library https://odlgroup.github.io/odl/
Mozilla Public License 2.0

Resampling operator #327

Closed adler-j closed 8 years ago

adler-j commented 8 years ago

We currently have very good functionality for resampling vectors to different grids, namely by:

out.restriction(in.extension)

We should formalize this by adding something like the following to default_ops:

import odl

class Resampling(odl.Operator):
    def __init__(self, domain, range):
        # linear mapping between the two discretized spaces
        super().__init__(domain, range, linear=True)

    def _call(self, x, out):
        # extend x to a continuous function, then restrict (sample)
        # that function on the grid of the range
        out.restriction(x.extension)

    @property
    def inverse(self):
        # not truly an inverse, but ok?
        return Resampling(self.range, self.domain)

    @property
    def adjoint(self):
        return self.inverse

This is obviously strongly related to #59, but it could perhaps be implemented separately, without all of the complications tied to that issue (correct adjoints etc.).

kohr-h commented 8 years ago

This suggestion neatly sidesteps the issues in #59 and the conceptual problem of re-discretizations. We can simply declare that resampling means re-discretizing with the same interpolation scheme.

The adjoint could be wrong, however, at least for irregular grids. I'm checking that.

kohr-h commented 8 years ago

Yep. For regular grids, the adjoint needs to be rescaled by range.cell_volume / domain.cell_volume, and for irregular grids, the correct adjoint is (in pseudocode):

def adjoint(x, out):
    # weight by the cell sizes of the range, resample onto the
    # domain grid, then divide out the domain cell sizes
    scaled = x * range.cell_sizes
    out.restriction(scaled.extension)
    out /= domain.cell_sizes
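
For regular grids, that factor can be read off directly, ignoring what the interpolation kernel itself contributes (S below is just shorthand for the plain point-sampling matrix, not anything in ODL): with the weighted inner products (u, v)_X = h_X * sum_i u_i v_i and (w, z)_Y = h_Y * sum_j w_j z_j, where h_X and h_Y are the cell volumes,

(Sx, y)_Y = h_Y * sum_j (Sx)_j y_j = h_X * sum_i x_i * ((h_Y / h_X) * S^T y)_i = (x, (h_Y / h_X) * S^T y)_X,

so the adjoint is the unweighted transpose S^T scaled by exactly range.cell_volume / domain.cell_volume.
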
adler-j commented 8 years ago

I disagree with the rescaling; we already do that in the inner products. Proof of concept:

import odl

X = odl.uniform_discr(0, 1, 10)
Y = odl.uniform_discr(0, 1, 50)

x = X.one()
y = Y.one()

A = Resampling(X, Y)  # as defined above

print(A(x).inner(y))          # prints 1.0
print(x.inner(A.adjoint(y)))  # prints 1.0
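
For the record, the arithmetic behind the two printed values, using the weighted inner products (u, v) = h * sum_i u_i v_i that uniform_discr spaces carry (h = 1/10 on X, h = 1/50 on Y): resampling the constant one function in either direction again yields the constant one, so

(Ax, y)_Y = (1/50) * 50 = 1,    (x, A^T y)_X = (1/10) * 10 = 1.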

Edit: Running the diagnostics, I get the following info:

== Verifying adjoint of operator ==

Domain and range of adjoint is OK.

Verifying the identity (Ax, y) = (x, A^T y)
x=One                       y=Cube                      : error=0.05345
x=Step                      y=Cube                      : error=0.03780
x=Cube                      y=Cube                      : error=0.06901
x=Cube                      y=Gaussian                  : error=0.00078
x=Gaussian                  y=Cube                      : error=0.00609
x=Gaussian                  y=Gaussian                  : error=0.01896
x=grad 0                    y=Cube                      : error=0.04506
error = ||(Ax, y) - (x, A^T y)|| / ||A|| ||x|| ||y||
*** FAILED 7 TEST CASE(S) ***

The adjoint seems to be scaled according to:
(x, A^T y) / (Ax, y) = 1.01353215717. Should be 1.0

The error stems from the extension and restriction not being exact inverses of each other (we lose resolution at the edges).
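
To see the edge effect concretely, here is a minimal NumPy stand-in for the cell-centered grids above (np.interp plays the role of extension/restriction; ODL's actual operators use the space's interpolation scheme, so this is only an illustration):

import numpy as np

fine = (np.arange(50) + 0.5) / 50    # cell centers of uniform_discr(0, 1, 50)
coarse = (np.arange(10) + 0.5) / 10  # cell centers of uniform_discr(0, 1, 10)

f = fine.copy()                             # f(x) = x on the fine grid
f_coarse = np.interp(coarse, fine, f)       # "restriction" to the coarse grid
f_back = np.interp(fine, coarse, f_coarse)  # "extension" back to the fine grid

err = np.abs(f - f_back)
print(err[:3])   # [0.04 0.02 0.  ]: constant extension left of x = 0.05
print(err[-3:])  # [0.   0.02 0.04]: same effect at the right edge

The round trip is exact in the interior here (f is linear and the coarse points lie on fine points), but the constant extension beyond the outermost grid points loses the slope there, which is exactly the missing resolution at the edges.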

kohr-h commented 8 years ago

Hm, yes, I made a mistake: the interpolation kernels need to be rescaled, too, and that could very well cancel the global scaling effect. What happens with non-constant functions? With linear interpolation? I don't think we get out of this that cheaply.

adler-j commented 8 years ago

If we wanted to be proper, we would need a "correct" extension and restriction (i.e. taking the mean over all values in the corresponding cell of the other grid); currently we only do point evaluation.
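
For nested grids like the 10/50 pair above, the mean-over-cells restriction is a one-liner (a sketch, not proposed API):

import numpy as np

f_fine = np.random.rand(50)                    # values on the fine grid
f_coarse = f_fine.reshape(10, 5).mean(axis=1)  # each coarse cell covers 5 fine cells

Doing this consistently with the interpolation scheme on non-nested or irregular grids is the hard part.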

kohr-h commented 8 years ago

And we probably never will, because it's too expensive and too unstable. You would have to compute the dual basis of the interpolation basis and integrate against that to get the correct restriction operator, which amounts to inverting a tridiagonal matrix for each axis to get the coefficients of the dual basis in the interpolation basis. It worked okay for regular grids - you basically get a narrow convolution kernel instead of a Dirac delta - but for irregular grids, it depended strongly on the distribution of the points. One could probably derive a stability estimate in terms of the separation distance or something like that. I found it too unstable and not worth exploring further.
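
For concreteness, with linear (hat-function) interpolation on a regular grid with spacing h, the Gram matrix of the basis is tridiagonal (2h/3 on the diagonal, h/6 off it, in the interior rows), and one dual-basis element comes out of a banded solve (a sketch; boundary rows left unmodified for simplicity):

import numpy as np
from scipy.linalg import solve_banded

n, h = 11, 0.1

ab = np.empty((3, n))
ab[0, :] = h / 6      # superdiagonal (ab[0, 0] is ignored)
ab[1, :] = 2 * h / 3  # diagonal
ab[2, :] = h / 6      # subdiagonal (ab[2, -1] is ignored)

e = np.zeros(n)
e[5] = 1.0                       # "Dirac" at index 5
c = solve_banded((1, 1), ab, e)  # dual-basis coefficients in the hat basis
print(np.round(c, 2))            # narrow, alternating-sign kernel around index 5

This is the narrow convolution kernel mentioned above; on an irregular grid, the matrix entries vary with the local point spacing, which is where the instability comes from.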

Anyway, if we call that guy up there adjoint, we should at least say that it's only an approximation, albeit a good one. The same goes for the inverse, although that case is a bit special: the composition of a resampling with its "inverse" is a projector, so it's probably the pseudo-inverse.