mind-inria / mri-nufft

Doing non-Cartesian MR Imaging has never been so easy.
https://mind-inria.github.io/mri-nufft/
BSD 3-Clause "New" or "Revised" License

Clarification on batched NUFFT #179

Closed mcencini closed 3 months ago

mcencini commented 3 months ago

Hi, I wanted to ask for some clarification on the batched NUFFT mode, which seems to be supported by the finufft/cufinufft and torchkbnufft (CPU/GPU) backends. Specifically, I am focusing on the torchkbnufft backend at the moment.

According to the torchkbnufft documentation, a batch of trajectories is a stack of small k-space trajectories that is processed in parallel, e.g., for dynamic imaging. To enable parallel computation in torchkbnufft, a batch dimension should be included in the trajectory tensor, i.e., (n_batchs, n_dim, n_samples) in their notation. However, this does not seem to be used in the mri-nufft interface:

            samples.astype(np.float32, copy=False), normalize="pi"
        )
        self.samples = torch.tensor(samples).to(self.device)

which produces a flattened (n_samples * n_batchs, n_dim) trajectory used as-is in both "op" and "adj_op". Before working on a modification of the interface (which I would be glad to do!), I wanted to make sure I did not get this completely wrong, and that the "batch" axis does not have a different meaning in this context. Thanks!
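For illustration, the flattening described above can be sketched in NumPy (shapes only; the array contents here are dummy data, and the dimensions are arbitrary, not taken from mri-nufft):

```python
import numpy as np

# Hypothetical batch of trajectories in torchkbnufft's convention:
# (n_batchs, n_dim, n_samples)
n_batchs, n_dim, n_samples = 4, 2, 100
batched_traj = np.random.rand(n_batchs, n_dim, n_samples).astype(np.float32)

# What the mri-nufft interface effectively ends up with: a single flat
# list of sample locations, shape (n_batchs * n_samples, n_dim),
# with the batch structure lost.
flat_traj = batched_traj.transpose(0, 2, 1).reshape(-1, n_dim)
print(flat_traj.shape)  # (400, 2)
```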

Matteo

paquiteau commented 3 months ago

Hi Matteo,

The "batched" dimension in MRI-NUFFT is only used for the image domain.

In other words, you have a single k-space sampling trajectory per operator. However, every operator can take an array/tensor of shape (B, <1 or C>, XYZ) (you don't need the coil dimension if the operator holds the coil sensitivity maps) and will return a k-space of shape (B, C, K) (and the other way around in the adjoint case).
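As a shape-only sketch of that convention (a mock forward operator standing in for a real mri-nufft operator, with arbitrary sizes, not the actual API):

```python
import numpy as np

def mock_nufft_op(images, n_samples):
    """Mock forward NUFFT: maps (B, C, *XYZ) images to (B, C, K) k-space.

    A real single-trajectory operator would evaluate the NUFFT of each
    image at its n_samples trajectory points; here we only return zeros
    of the right shape to illustrate the batching convention.
    """
    B, C = images.shape[:2]
    return np.zeros((B, C, n_samples), dtype=np.complex64)

B, C, X, Y = 8, 32, 64, 64   # batch, coils, 2D image size (arbitrary)
K = 5000                      # number of k-space samples (arbitrary)

images = np.zeros((B, C, X, Y), dtype=np.complex64)
kspace = mock_nufft_op(images, K)
print(kspace.shape)  # (8, 32, 5000)
```

The batch axis B here indexes images (e.g. dynamic frames) that all share the same trajectory, which is the point being made above.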

To apply a batch of trajectories to a batch of images, you would have to create a separate NUFFT operator per trajectory.
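That per-trajectory pattern might look like the following (a mock operator class stands in for the real mri-nufft operator; names and sizes are illustrative, not the actual API):

```python
import numpy as np

class MockNufft:
    """Stand-in for a single-trajectory NUFFT operator."""

    def __init__(self, samples):
        self.samples = samples  # (n_samples, n_dim) trajectory

    def op(self, image):
        # A real operator would compute the NUFFT of `image` at
        # `self.samples`; here we just return zeros of the right shape.
        return np.zeros(len(self.samples), dtype=np.complex64)

# One trajectory per dynamic frame -> one operator per frame.
trajs = [np.random.rand(100, 2).astype(np.float32) for _ in range(5)]
ops = [MockNufft(t) for t in trajs]

frames = [np.zeros((64, 64), dtype=np.complex64) for _ in range(5)]
kspaces = [op.op(f) for op, f in zip(ops, frames)]
print(len(kspaces), kspaces[0].shape)  # 5 (100,)
```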

On a side note, I would advise using a backend other than torchkbnufft, such as gpunufft or cufinufft, which are faster and more memory-efficient (see here for first results).

mcencini commented 3 months ago

Many thanks, this is super clear!