odlgroup/odl

Operator Discretization Library https://odlgroup.github.io/odl/
Mozilla Public License 2.0

Make adjoints, gradients etc. correct for weighted spaces #1068

Open · kohr-h opened this issue 7 years ago

kohr-h commented 7 years ago

Currently weighting is in a somewhat pitiful state in ODL. We support it in principle, but basically none of the operators and functionals get it right.

Example:

>>> rn = odl.rn(3)
>>> rn_w = odl.rn(3, weighting=[1, 2, 2])
>>> l2norm_sq = odl.solvers.L2NormSquared(rn)
>>> l2norm_sq_w = odl.solvers.L2NormSquared(rn_w)
>>> grad_at_one = l2norm_sq.gradient(rn.one())
>>> grad_at_one
rn(3).element([2.0, 2.0, 2.0])
>>> grad_at_one_w = l2norm_sq_w.gradient(rn_w.one())
>>> grad_at_one_w
rn(3, weighting=[1, 2, 2]).element([2.0, 2.0, 2.0])
>>> 
>>> grad_at_one.inner(rn.one())
6.0
>>> # The above is the same as
>>> l2norm_sq.derivative(rn.one())(rn.one())
6.0

>>> # But it should be the same in the weighted space, since
>>> # the derivative is independent of the weighting:
>>> l2norm_sq_w.gradient(rn_w.one()).inner(rn_w.one())
10.0
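
For reference, here is a minimal NumPy sketch (an illustration, not ODL code) of what the weighted-space gradient should look like, assuming L2NormSquared on the weighted space is meant as the same map x -> sum(x**2) and the gradient is the Riesz representative w.r.t. the weighted inner product <u, v>_w = sum(w * u * v):

>>> import numpy as np
>>> w = np.array([1.0, 2.0, 2.0])       # the weighting of rn_w
>>> x = np.ones(3)                      # the point rn_w.one()
>>> h = np.ones(3)                      # the direction rn_w.one()
>>> grad_plain = 2 * x                  # unweighted gradient of x -> sum(x**2)
>>> grad_w = grad_plain / w             # Riesz representative w.r.t. <., .>_w
>>> grad_w.tolist()
[2.0, 1.0, 1.0]
>>> # The weighted inner product now reproduces the directional derivative:
>>> float(np.sum(w * grad_w * h))
6.0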

Similar situations occur with Operator.adjoint implementations. This clearly needs improvement, and the fix is usually not hard.
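
For the adjoint, the analogous correction would be: if A maps a space with weighting u to a space with weighting v, the adjoint w.r.t. the weighted inner products is diag(1/u) A^T diag(v) rather than the plain transpose. A small NumPy check (again just an illustration with made-up weights, not ODL code):

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> u = np.array([1.0, 2.0, 2.0])         # domain weighting
>>> v = np.array([3.0, 1.0, 0.5, 2.0])    # range weighting
>>> A = rng.standard_normal((4, 3))       # a matrix operator
>>> x = rng.standard_normal(3)
>>> y = rng.standard_normal(4)
>>> A_adj_w = (A.T * v) / u[:, None]      # diag(1/u) @ A.T @ diag(v)
>>> lhs = np.sum(v * (A @ x) * y)         # <A x, y>_v
>>> rhs = np.sum(u * x * (A_adj_w @ y))   # <x, A^*_w y>_u
>>> bool(np.isclose(lhs, rhs))
True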

kohr-h commented 7 years ago

This also applies to the newly added Operator.norm(), see #1067 (PR) and #1065 (issue).

adler-j commented 7 years ago

I guess the problem is with the power method more than Operator.norm, no?

kohr-h commented 7 years ago

It also affects the "exact" norm.
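
To make that concrete (again plain NumPy, not ODL API): between weighted spaces the exact operator norm is the largest singular value of diag(sqrt(v)) A diag(1/sqrt(u)), which generically differs from the unweighted spectral norm, and a power iteration only converges to it if it uses the weighted inner product and the weighted adjoint.

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> u = np.array([1.0, 2.0, 2.0])         # domain weighting
>>> v = np.array([3.0, 1.0, 0.5, 2.0])    # range weighting
>>> A = rng.standard_normal((4, 3))
>>> plain_norm = np.linalg.norm(A, 2)     # unweighted spectral norm
>>> weighted = np.sqrt(v)[:, None] * A / np.sqrt(u)
>>> weighted_norm = np.linalg.norm(weighted, 2)
>>> bool(np.isclose(plain_norm, weighted_norm))
False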