Closed balbasty closed 4 years ago
I agree this is an important one. At the moment I permute the nibabel array with .permute([2, 1, 0]) to make it compatible with nitorch.spatial. But if we don't permute the data, we would need to 'permute' the affine matrix instead, right? So it seems to be an issue either way (given how many neuroimaging software packages store data and affine transformations), no?
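To make the equivalence concrete, here is a small sketch (plain NumPy, with a made-up affine) showing that permuting the voxel data and permuting the first three columns of the affine the same way leave the voxel-to-world mapping unchanged:

```python
import numpy as np

# Hypothetical 2 mm isotropic affine, just for illustration
affine = np.array([[2., 0., 0.,  -90.],
                   [0., 2., 0., -126.],
                   [0., 0., 2.,  -72.],
                   [0., 0., 0.,    1.]])

perm = [2, 1, 0]  # the same permutation applied to the data array

# Permute the spatial columns of the affine to match the permuted data
affine_perm = affine.copy()
affine_perm[:, :3] = affine[:, perm]

# A voxel index, permuted the same way, maps to the same world coordinate
ijk = np.array([10, 20, 30])
world = affine[:3, :3] @ ijk + affine[:3, 3]
world_perm = affine_perm[:3, :3] @ ijk[perm] + affine_perm[:3, 3]
assert np.allclose(world, world_perm)
```

So either the data or the affine columns must be permuted, but never only one of the two.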
For some reason, I thought that the dimension ordering from nibabel would be opposite to that of SPM. Since they are the same, I agree that it is better to be consistent with nibabel.
I'll change everything so that all arguments are ordered as (D, H, W). This will break consistency with PyTorch, but it is not a big deal. It will take a bit of time, so I will create a branch to work on it.
Sounds great! It will improve code readability too; e.g., in the superres code, the arrays holding x and y are permuted at the end of the algorithm so as to write them to NIfTI with nibabel.
Here's the convention currently used in nitorch (copied from spatial.py):
NiTorch uses consistent ordering conventions throughout its API:
These conventions are mostly consistent with those used in PyTorch (conv, grid_sampler, etc.). However, some care must be taken when, e.g., convolving deformation fields or allocating tensors based on grid dimensions.
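One example of the care needed: for 5D inputs of shape (N, C, D, H, W), PyTorch's grid_sample expects the last dimension of the grid ordered as (x, y, z), i.e. (W, H, D), the reverse of the input's spatial dimensions. A minimal sketch (hypothetical shapes, identity resampling):

```python
import torch
import torch.nn.functional as F

# Input tensor ordered (N, C, D, H, W)
N, C, D, H, W = 1, 1, 4, 5, 6
inp = torch.arange(N * C * D * H * W, dtype=torch.float32).reshape(N, C, D, H, W)

# Build an identity sampling grid; align_corners=True maps [-1, 1] onto
# the full extent of each axis
zs = torch.linspace(-1, 1, D)
ys = torch.linspace(-1, 1, H)
xs = torch.linspace(-1, 1, W)
gz, gy, gx = torch.meshgrid(zs, ys, xs, indexing='ij')

# Note the reversed (x, y, z) order in the grid's last dimension
grid = torch.stack([gx, gy, gz], dim=-1)[None]  # shape (N, D, H, W, 3)

out = F.grid_sample(inp, grid, align_corners=True)
assert torch.allclose(out, inp)  # identity resampling reproduces the input
```

Mixing up the two orderings here is exactly the kind of bug a single project-wide convention should prevent.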
However, it is very counter-intuitive to sometimes think in (D, H, W) and sometimes in (W, H, D). If we have to change this convention, it is better to do it very early in the project; afterwards, it will be too late.
I think that what matters most is to be consistent with how nibabel loads volumes: in particular, with how the rows/columns of the affine matrix map to the dimensions of the loaded ndarray. We don't want users to have to permute the dimensions of the loaded arrays.
@brudfors