Open fzimmermann89 opened 3 months ago
…it would be nice to also implement a normal operator for the Fourier operator.
It would be good to implement a function to obtain the kernel for the whole normal operator, including any nufft, sorting, etc. It would also need to determine the required oversampling in different directions.
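As a hedged sketch of the kernel idea: for a toy Cartesian masked FFT (a stand-in for the real NUFFT case, where the kernel would have to live on an oversampled grid via Toeplitz embedding), the normal operator F^H M F can be applied as a single circular convolution with a precomputed point-spread-function kernel. All names here are illustrative, not our actual operator API:

```python
import torch

# Toy stand-in for the Fourier operator: an undersampled Cartesian FFT.
# In the uniform case no oversampling is needed and the kernel trick is exact.
n = 32
torch.manual_seed(0)
mask = (torch.rand(n) > 0.5).to(torch.complex64)  # k-space sampling mask M
x = torch.randn(n, dtype=torch.complex64)

# Direct evaluation of the normal operator: A^H A x = F^H M F x
direct = torch.fft.ifft(mask * torch.fft.fft(x))

# Same result via a precomputed kernel k = F^H M (the point spread function):
# A^H A x is a circular convolution with k, i.e. ifft(fft(x) * fft(kernel)).
kernel = torch.fft.ifft(mask)
via_kernel = torch.fft.ifft(torch.fft.fft(x) * torch.fft.fft(kernel))
```

For the non-uniform case the same structure holds, but the kernel has to be computed on an oversampled grid, which is exactly the "required oversampling in different directions" question above.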
Looks like an interesting alternative to torchkbnufft. Unfortunately their installation process is still a bit experimental: https://flatironinstitute.github.io/pytorch-finufft/installation.html#installation-from-pypi
If we make all our LinearOperators autograd-smart (#68) we would not even need to use pytorch-finufft for now and could just call cufinufft / finufft ourselves in a LinearOperator.
This also has the advantage that we would be able to set up the FFT plans in the init and reuse them for multiple calls (this is currently not supported in pytorch-finufft).
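A minimal sketch of the "autograd-smart LinearOperator" idea: a custom `torch.autograd.Function` whose backward applies the adjoint, wrapped in an operator class that does its (potentially expensive) setup once in `__init__`, the way a finufft plan would be cached. The masked FFT here is a hypothetical stand-in for the NUFFT, and all class/method names are assumptions, not our actual design:

```python
import torch

class _ApplyOp(torch.autograd.Function):
    """Forward applies the linear operator A, backward applies its adjoint A^H."""
    @staticmethod
    def forward(ctx, x, op):
        ctx.op = op
        return op._apply(x)

    @staticmethod
    def backward(ctx, grad):
        # For a C-linear operator, the vector-Jacobian product is A^H grad.
        return ctx.op._adjoint(grad), None

class MaskedFourierOp:
    """Toy stand-in: expensive setup (e.g. a finufft plan) would happen once here."""
    def __init__(self, mask):
        self.mask = mask.to(torch.complex64)

    def _apply(self, x):
        return self.mask * torch.fft.fft(x, norm="ortho")

    def _adjoint(self, y):
        # mask is real-valued 0/1, so conj(mask) == mask
        return torch.fft.ifft(self.mask * y, norm="ortho")

    def __call__(self, x):
        return _ApplyOp.apply(x, self)

# Gradient through the custom Function matches plain autograd through the
# same computation:
mask = torch.tensor([1, 0, 1, 1, 0, 1, 0, 1])
op = MaskedFourierOp(mask)
x = torch.randn(8, dtype=torch.complex64, requires_grad=True)
y = op(x)
loss = (y.real**2 + y.imag**2).sum()
loss.backward()

x_ref = x.detach().clone().requires_grad_(True)
y_ref = mask.to(torch.complex64) * torch.fft.fft(x_ref, norm="ortho")
(y_ref.real**2 + y_ref.imag**2).sum().backward()
```

With this pattern, the forward/adjoint pair could call cufinufft / finufft directly and autograd would still work, without depending on pytorch-finufft.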
At some point, it would still be nice to use pytorch-finufft, as it also implements the gradients with respect to the k-space points, i.e. we would be able to optimize the trajectory. But this would also require some rework of our operator design anyway.
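To illustrate what "gradients with respect to the k-space points" buys us: if the transform is written so that the trajectory enters differentiably, autograd can backpropagate into the sample locations. Here this is sketched with a slow explicit non-uniform DFT (a real implementation would get this gradient from pytorch-finufft's fast transform); the function and variable names are purely illustrative:

```python
import torch

def ndft(x, k):
    """Explicit non-uniform DFT of x (samples at integer positions) at frequencies k.

    Written out as a dense matrix so autograd can differentiate w.r.t. k.
    """
    n = x.shape[0]
    pos = torch.arange(n, dtype=k.dtype)
    A = torch.exp(-2j * torch.pi * k[:, None] * pos[None, :] / n)
    return A @ x.to(A.dtype)

n = 16
torch.manual_seed(0)
x = torch.randn(n, dtype=torch.complex128)
# hypothetical trajectory we want to optimize
k = torch.linspace(0.0, n - 1, 12, dtype=torch.float64, requires_grad=True)

signal = ndft(x, k)
loss = (signal.real**2 + signal.imag**2).sum()
loss.backward()  # k.grad now holds the gradient w.r.t. the trajectory
```

This is exactly the gradient path that torchkbnufft does not provide and that a trajectory-optimization workflow would need.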
cufinufft from PyPI makes pytorch-finufft pass its tests in a pytorch 2.4.0.dev20240602+cu121 conda env with a 12.1 CUDA runtime on e...51 (RTX 2080 with CUDA 12.1 driver).
This would also solve the autograd issues with torchkbnufft.
We should evaluate https://github.com/flatironinstitute/pytorch-finufft/tree/main/pytorch_finufft
Max and I tried finufft on CPU for his stuff some time ago, and it was faster than torchkbnufft. I spent the last two days digging through strange forking logic and threading issues in torchkbnufft. It might be easier to switch to pytorch_finufft than to make torchkbnufft fully compatible with torch.func.