Currently, weighting is in a somewhat pitiful state in ODL. We support it in principle, but basically none of the operators and functionals get it right.
Example:
>>> rn = odl.rn(3)
>>> rn_w = odl.rn(3, weighting=[1, 2, 2])
>>> l2norm_sq = odl.solvers.L2NormSquared(rn)
>>> l2norm_sq_w = odl.solvers.L2NormSquared(rn_w)
>>> grad_at_one = l2norm_sq.gradient(rn.one())
>>> grad_at_one
rn(3).element([2.0, 2.0, 2.0])
>>> grad_at_one_w = l2norm_sq_w.gradient(rn_w.one())
>>> grad_at_one_w
rn(3, weighting=[1, 2, 2]).element([2.0, 2.0, 2.0])
>>>
>>> grad_at_one.inner(rn.one())
6.0
>>> # The above is the same as
>>> l2norm_sq.derivative(rn.one())(rn.one())
6.0
>>> # But the result should be the same in the weighted space, since
>>> # the derivative is independent of the weighting:
>>> l2norm_sq_w.gradient(rn_w.one()).inner(rn_w.one())
10.0
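To make the arithmetic explicit: the 10.0 above is just the weighted inner product of the gradient [2, 2, 2] with the one vector, while 6.0 is the value obtained in the unweighted space (which, by the argument above, should not change under weighting). A quick plain-NumPy recomputation for illustration (the names w, grad, ones are only for this sketch):
>>> import numpy as np
>>> w = np.array([1.0, 2.0, 2.0])      # the space weighting
>>> grad = np.array([2.0, 2.0, 2.0])   # the gradient returned above
>>> ones = np.ones(3)
>>> float(np.sum(w * grad * ones))     # weighted inner product -> the 10.0 seen above
10.0
>>> float(2 * np.sum(ones))            # value in the unweighted space -> the expected 6.0
6.0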
Similar situations occur with Operator.adjoint implementations. This clearly needs improvement (usually the fix is not hard).
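The analogous check for an adjoint is whether the defining identity <A x, y> = <x, A^* y> holds when domain and range carry different weightings. A minimal sketch using odl.MatrixOperator with the spaces from above (whether the two numbers actually agree depends on how the adjoint implementation handles the weighting):
>>> import numpy as np
>>> A = odl.MatrixOperator(np.eye(3), domain=rn, range=rn_w)
>>> x = rn.element([1.0, 2.0, 3.0])
>>> y = rn_w.element([3.0, 2.0, 1.0])
>>> lhs = A(x).inner(y)           # <A x, y>, taken in the weighted range
>>> rhs = x.inner(A.adjoint(y))   # <x, A^* y>, taken in the unweighted domain
>>> # lhs and rhs must agree for A.adjoint to be the true adjoint; a mismatch
>>> # indicates that the weighting is not accounted for.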