GabriellaKamlish / BrainResection


Image orientation #4

Open GabriellaKamlish opened 3 years ago

GabriellaKamlish commented 3 years ago

I am currently working on the task of creating the function that generates a PNG image and determines the resection hemisphere, but I have a couple of queries about the processing that happens before the analysis.

We discussed on Friday that, in order to understand the dimensions along which the images are being sliced, I need to analyse the orientation of the image, which can be seen when printing the torchio dataset image:

  1. The orientation of the FPG data is "PIR+". I've tried googling what this means in terms of coordinate systems, but I can't seem to find it. The 3D Slicer page describes the orientations anatomically using LPS and RAS only, so I can't work out the matrix transformation to convert PIR to either of these coordinate systems.

  2. The function I am writing takes in 2 paths to the already resected data, whose orientation I can't see because it has been processed by resector and is no longer part of the torchio datasets, so when I print the image data there is obviously no orientation. How do I determine the orientation of these resected images without access to the original preprocessed data?

fepegar commented 3 years ago

The orientation of the FPG data is "PIR+". I've tried googling what this means in terms of coordinate systems, but I can't seem to find it. The 3D Slicer page describes the orientations anatomically using LPS and RAS only, so I can't work out the matrix transformation to convert PIR to either of these coordinate systems.

LPS and RAS are specific cases:

- LPS means indices along the 1st axis (or dimension) increase from right to left, along the 2nd from anterior to posterior, and along the 3rd from inferior to superior.
- RAS means indices along the 1st axis increase from left to right, along the 2nd from posterior to anterior, and along the 3rd from inferior to superior.
- PIR means indices along the 1st axis increase from anterior to posterior, along the 2nd from superior to inferior, and along the 3rd from left to right.

The '+' is just to ensure that P means A->P, as some conventions mean P->A when they write P (which is very confusing!). This is explained in the NiBabel docs.
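If it helps, here's a minimal sketch of how you could work out the axis permutation and flips yourself using NiBabel's orientations module (nothing here is specific to resector; the array is just random data for illustration):

import nibabel as nib
import numpy as np

# One row per array axis: [world axis it corresponds to, direction (+1 or -1)]
pir = nib.orientations.axcodes2ornt(('P', 'I', 'R'))
ras = nib.orientations.axcodes2ornt(('R', 'A', 'S'))

# Orientation transform (axis permutation + flips) taking PIR+ data to RAS+
transform = nib.orientations.ornt_transform(pir, ras)

array_pir = np.random.rand(4, 5, 6)  # pretend this array is stored in PIR+
array_ras = nib.orientations.apply_orientation(array_pir, transform)
print(array_ras.shape)  # (6, 4, 5): the lateral axis is now first, as in RAS+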

You could try to work out the matrices to convert between orientations, or you could just use torchio.ToCanonical, which converts the image to RAS+.
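For example (a rough sketch; the path is just a placeholder, and if applying a transform directly to an Image doesn't work in your torchio version, wrap it in a tio.Subject first):

import torchio as tio

image = tio.ScalarImage('t1.nii.gz')   # placeholder path
image_ras = tio.ToCanonical()(image)   # reorient to RAS+
print(image_ras.orientation)           # ('R', 'A', 'S')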

The function I am writing takes in 2 paths to the already resected data, whose orientation I can't see because it has been processed by resector and is no longer part of the torchio datasets, so when I print the image data there is obviously no orientation. How do I determine the orientation of these resected images without access to the original preprocessed data?

It doesn't matter that an image is not in torchio.datasets. You can always print its info. To make sure it prints all the info, you can load() it first.

In [1]: import torchio as tio

In [2]: image = tio.ScalarImage('.cache/torchio/NIFTI_ovine_05mm/ovine_model_05.nii')

In [3]: image
Out[3]: ScalarImage(path: ".cache/torchio/NIFTI_ovine_05mm/ovine_model_05.nii"; type: intensity)

In [4]: image.load()

In [5]: image
Out[5]: ScalarImage(shape: (1, 241, 317, 243); spacing: (0.50, 0.50, 0.50); orientation: LPS+; memory: 70.8 MiB; type: intensity)

Moreover, you don't need access to the original data if the orientation is correctly saved in resector output (it is). You just need to know which is the lateral axis in the input (the one corresponding to L or R), and whether indices along that axis grow towards right (R) or left (L). Then you can slice the image along that axis and sum the positive voxels to get the resection volumes for each hemisphere (assuming the mid-sagittal plane is close to the origin in world coordinates and the head is not too tilted, which should be the case for the images I'll give to you).
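In 3D, that could look something like this. A rough sketch, assuming the label has been reoriented to RAS+ (so the first axis is lateral, with indices growing towards the subject's right) and splitting at the middle of the array as an approximation of the mid-sagittal plane; the path is just a placeholder:

import torchio as tio

label = tio.LabelMap('resected_label.nii.gz')   # placeholder path to a resection label
label = tio.ToCanonical()(label)                # make sure the orientation is RAS+

data = label.data[0]                            # drop the channel dimension
si = data.shape[0]                              # size of the first (lateral) axis
left_voxels = data[: si // 2].sum()             # low indices are towards the subject's left in RAS+
right_voxels = data[si // 2 :].sum()            # high indices are towards the subject's right
hemisphere = 'right' if right_voxels > left_voxels else 'left'
print(hemisphere, left_voxels.item(), right_voxels.item())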

For example, we could say this image is in IL orientation (no anterior or posterior as it's 2D) because when we load it with e.g. scikit-image, the first axis will grow towards his feet and the second will grow towards his left hand. Of course, this information is not stored in the header because it's not a medical image.

In [1]: from skimage import io

In [2]: array = io.imread('image.webp.jpg')

In [3]: si, sj = array.shape[:2]

In [4]: superior = array[:si // 2, :]  # second index actually not necessary

In [6]: inferior = array[si // 2:, :]  # second index actually not necessary

In [7]: right = array[:, :sj // 2]  # subject's right, not ours

In [8]: left = array[:, sj // 2:]  # subject's left, not ours

If I apply some transformation to the image, I might change its orientation. This version of the image (webp) is now in SR orientation.

Does that make sense?

GabriellaKamlish commented 3 years ago

I have managed to complete the task by loading the image and getting the data using nibabel. However, I am trying to improve the function by using the torchio ToCanonical transform, but I am unsure how to get the data from the transformed image, since ScalarImage seems to have no attribute to get the data.

fepegar commented 3 years ago

ScalarImage has no attribute to get data

Not sure what you mean:

In [1]: import torchio as tio

In [2]: import torch

In [3]: image = tio.ScalarImage(tensor=torch.rand(1,2,3,4))

In [4]: image.data
Out[4]:
tensor([[[[0.5971, 0.9597, 0.5409, 0.0750],
          [0.2110, 0.6427, 0.0813, 0.2405],
          [0.1153, 0.1763, 0.7544, 0.7609]],

         [[0.7706, 0.1729, 0.7450, 0.5170],
          [0.5165, 0.3528, 0.9903, 0.4870],
          [0.1704, 0.3933, 0.0092, 0.2803]]]])

fepegar commented 3 years ago

Why do you want to reimplement it? It uses NiBabel to transform the image to RAS.
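If you ever need the equivalent directly with NiBabel, it's roughly this (a simplified sketch, not the actual ToCanonical code; the path is a placeholder):

import nibabel as nib

nii = nib.load('t1.nii.gz')               # placeholder path
nii_ras = nib.as_closest_canonical(nii)   # reorient to RAS+ ("canonical") axes
print(nib.aff2axcodes(nii_ras.affine))    # ('R', 'A', 'S')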

GabriellaKamlish commented 3 years ago

Because the converted image was not being loaded by nibabel in my previous function, I was just reloading the original image. But I fixed it and everything works now!