I created this issue as a discussion on what other extensions the finufft library could provide that are still within the scope of what it tries to achieve. Mainly, I hope it can shed some light on the reply @ahbarnett provided in #306. As such, the listed points are written as if replying to that comment, so it is probably best to read that comment first, and perhaps also the expertly written inverse nufft tutorial.
First, I see how exposing the spreading/interpolation operations is problematic. I agree that the nufft is the crucial feature of the library, and if other spreaders/interpolators were found that would make the nufft faster, those should be prioritized! I think exposing just the spreader and interpolator in C/C++ would still be useful, but users would have to be careful, or at least aware, that these may change between versions and so on. If you ask me, by no means make this a priority. However, as mentioned in @ahbarnett's reply, the sinc^2 weights of Greengard, Inati, et al. would be very useful! So, if it is feasible to implement an interface to sinc^2 quadrature weights, that would be great! If exposing the fast sinc^2 transform in the process seems reasonable, that would also be nice.
As pointed out, in MRI applications one often tries to minimize ||Ax-b||, where A represents a nufft. A may be a 2D or 3D nufft depending on the particular problem. The iterative solvers have to repeatedly perform the operation A^HA, the normal operator. Sometimes, to improve convergence, one instead solves ||W(Ax-b)||, where W is a diagonal matrix, so that the iterations compute A^HWA instead, trading away the best-likelihood estimate for convergence speed. Preconditioning might seem like a better alternative, and indeed sometimes it is, but sometimes not: preconditioning changes some of the proximal algorithms and makes them more difficult. The W used for the convergence speedup is also often the same as the "density compensation weights" mentioned. Sometimes the density compensation weights are also used to perform a "gridding reconstruction": one finds W such that, instead of solving ||Ax-b|| with an iterative solver, one just computes x = A^H(Wb), which is approximate but fast!

As also mentioned in the inverse tutorial (and often exploited in MRI), the Toeplitz approach can greatly increase the speed of applying A^HA. But this approach is sometimes not viable. For some problems, multiple different A^HA (with differing coordinates) have to be applied in each iteration, and storing the Toeplitz kernels for all of them takes up too much memory, so A^HA has to be computed via nuffts instead. Other modifications to the minimization problem, such as accounting for off-resonance effects, also make the Toeplitz approach less feasible. These issues are actually discussed in the Fessler article referenced in the inversion tutorial.
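To make the two uses above concrete, here is a minimal Python sketch of a 2D gridding reconstruction x = A^H(Wb) and one application of the weighted normal operator A^HWA, with A modeled as a type-2 transform from the image grid to nonuniform k-space samples and A^H as the corresponding type-1 transform. Only the `finufft.nufft2d1`/`finufft.nufft2d2` calls are the actual library API; the sizes, coordinates `kx`, `ky`, data `b`, and weights `w` are made-up placeholders.

```python
import numpy as np
import finufft

# Illustrative sizes and data (placeholders, not part of finufft).
N = 256                                  # image grid is N x N
M = 100_000                              # number of nonuniform k-space samples
rng = np.random.default_rng(0)
kx = rng.uniform(-np.pi, np.pi, M)       # sample coordinates in [-pi, pi)
ky = rng.uniform(-np.pi, np.pi, M)
b = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # measured data
w = np.ones(M)                           # diagonal of W (density compensation)

def A(x):
    # Forward model: uniform image modes -> nonuniform samples (type-2 nufft).
    return finufft.nufft2d2(kx, ky, x, isign=-1, eps=1e-9)

def AH(y):
    # Adjoint: nonuniform samples -> uniform image modes (type-1 nufft).
    return finufft.nufft2d1(kx, ky, y, (N, N), isign=+1, eps=1e-9)

# "Gridding reconstruction": approximate but fast, no iterations.
x_grid = AH(w * b)

# One application of the weighted normal operator A^H W A, as used inside CG etc.
normal_x = AH(w * A(x_grid))
```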
After all this text, I would like to propose some features that would be really useful for MRI users and that hopefully are still within finufft's grasp. I list them in order of importance.
A fast normal operator! It would be beneficial to provide an interface that exposes the normal operator, i.e. an operation that computes A^HA. I was hoping this might also lend itself to a speedup, or a smaller memory footprint. You don't need two FFT plans, right? And perhaps fewer fftshifts are required as well? It would also be beneficial if an optional diagonal vector could be supplied to this operation, so that operations such as A^HWA can be performed. I don't think this would expose any inner workings of how the nuffts are performed, and it seems a reasonable extension of finufft, as it is basically just what is required to perform an inverse nufft. A sketch of how this currently looks from the user side follows.
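For reference, here is roughly how a user composes A^HWA today from two plans sharing the same points; a built-in normal operator could replace (and hopefully beat) this. Only `finufft.Plan`, `setpts`, and `execute` are the real API; the wrapper class and all other names are my own illustration.

```python
import numpy as np
import finufft

class WeightedNormalOperator:
    """Applies x -> A^H W A x using a type-2 and a type-1 plan on shared points.
    Illustrative wrapper only; not a finufft interface."""

    def __init__(self, kx, ky, n_modes, w=None, eps=1e-9):
        self.w = w
        self.plan2 = finufft.Plan(2, n_modes, eps=eps, isign=-1)   # A
        self.plan1 = finufft.Plan(1, n_modes, eps=eps, isign=+1)   # A^H
        self.plan2.setpts(kx, ky)
        self.plan1.setpts(kx, ky)

    def apply(self, x):
        y = self.plan2.execute(x)          # A x: values at the nonuniform points
        if self.w is not None:
            y = y * self.w                 # W A x
        return self.plan1.execute(y)       # A^H W A x: back on the uniform grid

# Usage inside an iterative solver (placeholder data):
N, M = 256, 100_000
rng = np.random.default_rng(0)
kx, ky = rng.uniform(-np.pi, np.pi, (2, M))
op = WeightedNormalOperator(kx, ky, (N, N), w=np.ones(M))
x = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
nx = op.apply(x)
```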
A Toeplitz kernel creation interface. For users who want to use the Toeplitz approach for inversion, it would be great if finufft provided an interface that calculated the kernels. Hopefully such that they come out real, with Hermitian symmetry, too!
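To illustrate what such an interface would return, here is a rough sketch of the usual construction (following, e.g., the embedding described by Fessler): the kernel of A^HWA is obtained by a single type-1 nufft of the weights onto a doubly oversampled grid, after which A^HWA can be applied with FFTs alone. The names, and the shift/crop conventions, are my own assumptions and would need checking against finufft's mode ordering.

```python
import numpy as np
import finufft

# Placeholder setup (not finufft API): N x N image, M nonuniform points, weights w.
N, M = 128, 50_000
rng = np.random.default_rng(0)
kx, ky = rng.uniform(-np.pi, np.pi, (2, M))
w = np.ones(M)

# Toeplitz kernel: type-1 nufft of the weights onto a (2N)^2 grid. This is the
# point-spread function of A^H W A (with A being a type-2 transform, isign=-1).
kernel = finufft.nufft2d1(kx, ky, w.astype(np.complex128), (2 * N, 2 * N),
                          isign=+1, eps=1e-9)
T = np.fft.fft2(np.fft.ifftshift(kernel))   # transfer function on the 2N grid

def toeplitz_normal(x):
    """Apply A^H W A x by zero-padding to 2N, multiplying in Fourier, and cropping."""
    pad = np.zeros((2 * N, 2 * N), dtype=np.complex128)
    pad[:N, :N] = x
    return np.fft.ifft2(T * np.fft.fft2(pad))[:N, :N]
```

With real w, T should come out (numerically) close to real, which relates to the real/Hermitian-symmetry point above.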
Sinc^2 quadrature weights. Other reasonably good quadrature weights (very unspecific, I know), if available or known to you, would also be interesting!
As I see it, these features are all within the scope of what finufft strives to achieve, especially the first one. At least, if one considers the inverse nufft a goal. And I also think that changes to the inner workings of how the nufft is performed could still be made without such an interface being unstable.
I now see that there actually was a discussion post on this topic, I don't know how I missed that! 😮 I will move this post there instead and close this issue!