Open fzimmermann89 opened 1 week ago
@ckolbPTB
I removed all typing information from `__init_subclass__`. Mypy will most likely never be able to check inside the autograd wrapper anyway, and if I am not mistaken, mypy currently works as if there were no adjoint wrapper. The forward, the adjoint, and all calls of these will still be type checked.
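For context, a rough sketch of the wrapping idea with the hook itself left untyped (names and the single-tensor signature are assumptions for illustration, not the actual MRpro code; the subclasses' `forward`/`adjoint` keep their own, still type-checked, annotations):

```python
import torch


class LinearOperator:
    """Sketch only: wrap subclasses' forward in an autograd.Function."""

    def __init_subclass__(cls, adjoint_as_backward=False, **kwargs):  # deliberately untyped
        super().__init_subclass__(**kwargs)
        if not adjoint_as_backward:
            return
        original_forward = cls.forward
        original_adjoint = cls.adjoint

        class _Wrapped(torch.autograd.Function):
            @staticmethod
            def forward(ctx, op, x):
                ctx.op = op
                # runs with grad tracking disabled inside the Function
                return original_forward(op, x)

            @staticmethod
            def backward(ctx, grad_output):
                # no gradient for the operator object, adjoint for x
                return None, original_adjoint(ctx.op, grad_output)

        cls.forward = lambda self, x: _Wrapped.apply(self, x)
```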
Coverage Report
| Tests | Skipped | Failures | Errors | Time |
|---|---|---|---|---|
| 828 | 0 :zzz: | 0 :x: | 0 :fire: | 1m 7s :stopwatch: |
There is a fundamental issue with using the adjoint as the backward:
During the forward inside the wrapper, autograd is disabled, so gradients from any tensors held by `self` do not flow to the output. Taking the gridsampleop as an example: the gradient w.r.t. `x` of the operator will work, but the gradient w.r.t. the grid will not.
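A minimal, self-contained sketch of what goes wrong (toy operator, hypothetical names): the forward is not recorded on the autograd graph, and the backward only returns the adjoint applied to the upstream gradient, so tensors the operator itself holds never receive a gradient.

```python
import torch


class _AdjointAsBackward(torch.autograd.Function):
    @staticmethod
    def forward(ctx, op, x):
        ctx.op = op
        # not recorded on the autograd graph: op.weight is cut off here
        return op.weight * x

    @staticmethod
    def backward(ctx, grad_output):
        # adjoint of y = weight * x w.r.t. x, but nothing for op.weight
        return None, ctx.op.weight.conj() * grad_output


class ToyOp:
    def __init__(self, weight):
        self.weight = weight


op = ToyOp(torch.tensor(2.0, requires_grad=True))
x = torch.tensor(3.0, requires_grad=True)
y = _AdjointAsBackward.apply(op, x)
y.backward()
print(x.grad)          # tensor(2.) -- gradient w.r.t. x works
print(op.weight.grad)  # None       -- gradient w.r.t. the operator's tensor is lost
```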
This is difficult to solve without a custom solution for each linear operator.
We might be able to use it for operators that do not depend on any other tensor, but most of our operators would then not be covered by this solution.
No idea how to proceed.
Maybe we just do a custom autograd for the Fourier operator?
@schuenke @ckolbPTB
> There is a fundamental issue with using the adjoint as the backward:
That is a shame!
> Maybe we just do a custom autograd for the Fourier operator?
Can we provide a custom gradient w.r.t. `x` but use PyTorch autograd w.r.t., e.g., `traj`, or would we have to provide our own autograd functionality for each parameter?
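In principle this is possible for operators whose plain forward is itself autograd-differentiable w.r.t. the second tensor: a custom `autograd.Function` can return the analytic, adjoint-based gradient for `x` and, in its backward, recompute the forward under `torch.enable_grad()` to get the gradient w.r.t. `traj` via `torch.autograd.grad`. A rough sketch with hypothetical `_operator_forward`/`_operator_adjoint` placeholders; this does not help when the underlying kernel (e.g. torchkbnufft's interpolation) is not differentiable w.r.t. the trajectory:

```python
import torch


def _operator_forward(x, traj):
    # placeholder for the actual operator; must itself be autograd-differentiable
    # w.r.t. traj for the recomputation trick below to work
    return (x * torch.cos(traj)).sum(-1)


def _operator_adjoint(y, traj):
    # placeholder adjoint of the linear map x -> forward(x, traj)
    return y.unsqueeze(-1) * torch.cos(traj)


class MixedGradOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, traj):
        ctx.save_for_backward(x, traj)
        return _operator_forward(x, traj)

    @staticmethod
    def backward(ctx, grad_output):
        x, traj = ctx.saved_tensors
        # analytic gradient w.r.t. x: adjoint applied to the upstream gradient
        grad_x = _operator_adjoint(grad_output, traj) if ctx.needs_input_grad[0] else None
        grad_traj = None
        if ctx.needs_input_grad[1]:
            # gradient w.r.t. traj: redo the forward with autograd enabled
            with torch.enable_grad():
                traj_ = traj.detach().requires_grad_(True)
                y = _operator_forward(x.detach(), traj_)
                (grad_traj,) = torch.autograd.grad(y, traj_, grad_output)
        return grad_x, grad_traj
```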
torchkbnufft also does not work for `traj` requiring gradients, afaik.
We could.
I already started doing that work in the torchkbnufft repo, and we can look up the equations in https://arxiv.org/abs/2111.02912 and the finufft PyTorch wrapper.
I will try to mock something up tonight or tomorrow morning.
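For reference, a sketch of the core relation for the 1D type-2 NUFFT case (conventions to be checked against the paper): with

$$y_m = \sum_n x_n \, e^{-\mathrm{i}\,\omega_m n},$$

the derivative with respect to a trajectory point is

$$\frac{\partial y_m}{\partial \omega_m} = -\mathrm{i}\sum_n n\, x_n\, e^{-\mathrm{i}\,\omega_m n},$$

i.e. again a NUFFT, now applied to the coordinate-weighted image $n \odot x$, which is what makes an efficient backward for the trajectory possible.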
This can now be enabled per operator via the `adjoint_as_backward` setting, i.e.
`class Op(LinearOperator, adjoint_as_backward=True)`.
The default is `False`; it has to be enabled for each operator that should use it.
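A sketch of how a subclass would opt in (import path, return-tuple convention, and the toy operator are assumptions for illustration; a parameter-free operator is used, so the caveat about gradients of the operator's own tensors does not apply):

```python
import torch

from mrpro.operators import LinearOperator  # assumed import path


class FlipOp(LinearOperator, adjoint_as_backward=True):
    """Toy operator without tensor parameters, for illustration only."""

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor,]:
        return (torch.flip(x, dims=(-1,)),)

    def adjoint(self, y: torch.Tensor) -> tuple[torch.Tensor,]:
        # flipping is orthogonal, so the adjoint is the flip itself
        return (torch.flip(y, dims=(-1,)),)
```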
Alternative to #307, using `__init_subclass__` as I suggested in #68.