fepegar / torchio

Medical imaging toolkit for deep learning
https://torchio.org
Apache License 2.0

Is there a convenient way to plot a slice from a patient volume? #180

Closed farazahmeds closed 4 years ago

farazahmeds commented 4 years ago

Is there any convenient way to display, for example, the nth slice of the image in subjects_dataset[0]?

I want to visualize slices as I apply transforms to them. The only thing I came up with is converting the 3D tensor back to a .nii image and plotting the slice from that.

Can you suggest any convenient method?

Thank you, much appreciated.

Faraz

fepegar commented 4 years ago

What about this:

import torchio
import matplotlib.pyplot as plt

n_slice = 90
subject = torchio.datasets.Colin27()  # sample subject shipped with torchio
sample = torchio.ImagesDataset([subject])[0]
data = sample['t1'].numpy()[0]  # drop the channel dimension
plt.imshow(data[n_slice])  # plot the nth slice along the first axis
plt.show()

[Figure_1: the slice plotted by the code above]

Did you solve the visualization issues on Colab? And if yes, how?

fepegar commented 4 years ago

Btw you can easily make that figure prettier using a more natural colormap, a proper orientation, world coordinates...

import numpy as np

prettier_slice = np.rot90(np.fliplr(data[n_slice]))  # flip and rotate to a more natural orientation
plt.imshow(prettier_slice, cmap='gray', origin='lower')
plt.show()

[Figure_2: the same slice with a gray colormap and natural orientation]
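
For the world coordinates part, a minimal sketch: pass the physical extent of the slice, in mm, to imshow. The 1 mm spacing below is the Colin 27 value; for other images it should be read from the header.

import numpy as np
import matplotlib.pyplot as plt

spacing = 1.0  # Colin 27 is 1 mm isotropic; read the real value from the image header
sl = np.rot90(np.fliplr(data[n_slice]))
height_mm = sl.shape[0] * spacing
width_mm = sl.shape[1] * spacing
plt.imshow(sl, cmap='gray', origin='lower', extent=(0, width_mm, 0, height_mm))
plt.xlabel('mm')
plt.ylabel('mm')
plt.show()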

romainVala commented 4 years ago

Hello

nibabel also has a very simple 3D viewer; it is basic, but it can be useful:

from nibabel.viewers import OrthoSlicer3D as ov

data = sample['t1'].numpy()[0]
ov(data).show()  # open an interactive viewer with three orthogonal views

We are developing a viewer to look at several subjects at the same time. The nice feature is that it can take a torchio.ImagesDataset, so if the dataset is instantiated with a transform, you will see the transformed data. This is still in development, so take it with care, but you can give it a try if you are interested: https://github.com/romainVala/torchQC/blob/master/plot_dataset.py (you can download just this function; it is independent of the rest of the code).

fepegar commented 4 years ago

@romainVala I get an error when trying to use that viewer:

AttributeError: draw_artist can only be used after an initial draw which caches the renderer

Has this happened to you too?

Also, can you share a screenshot of the output of your PlotDataset? It looks good!

romainVala commented 4 years ago

I did not see such errors; maybe @GFabien has an idea (since this is his first work on it... many thanks to him!).

Well, this is very preliminary work, but there are already quite a lot of parameters, which should make it very modular. What I really appreciate is the possibility to compare a lot of subjects. Here is an example where I chose 2 views per subject:

int_plot = PlotDataset(dataset, subject_idx=64, update_all_on_scroll=True, add_text=False,
                       subject_org=(8, 8), views=(('sag', 'vox', 40), ('ax', 'vox', 50)),
                       view_org=(1, 2), image_key_name=image_key_name)

[Screenshot: an 8 x 8 grid of subjects, each with a sagittal and an axial view]

And we can change all the slices simultaneously (with mouse scrolling)! For now we cannot change the contrast, but the idea is to see the data exactly as the CNN will see it, so you can choose specific intensity scaling transforms to get a more homogeneous contrast between subjects (or not, depending on the user's choices).
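
For example, a minimal sketch of such an intensity scaling transform using torchio's RescaleIntensity (subjects_list is a placeholder for your own list of subjects):

import torchio
from torchio.transforms import RescaleIntensity

# Map each subject's intensities to [0, 1] so contrast is more homogeneous across subjects
rescale = RescaleIntensity(out_min_max=(0, 1))
dataset = torchio.ImagesDataset(subjects_list, transform=rescale)
# PlotDataset(dataset, ...) then shows the rescaled data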

fepegar commented 4 years ago

Awesome work! Does it assume RAS orientation?

P.S.: I got the error on the NiBabel viewer, not yours!

fepegar commented 4 years ago

Also this Slicer module can be used to experiment and visualize: https://github.com/fepegar/SlicerTorchIO

I'm working on publishing it in the official extension repository.

romainVala commented 4 years ago

> P.S.: I got the error on the NiBabel viewer, not yours!

No, I have been using it for a long time (it is basic but useful and quick). (nibabel version 3.0.0)

GFabien commented 4 years ago

> Awesome work! Does it assume RAS orientation?

No, it does not. If you want to ensure RAS orientation, you have to do it at the dataset level.

What I like most about this viewer is that you can scroll to navigate between your views and investigate your whole volume!

romainVala commented 4 years ago

We want to show the data exactly as it is delivered by torchio, so, as pointed out by Fabien, you could add the transform (and if you do not, you should see the difference, depending on your dataset of course).

A possible improvement would be to provide that information with a text legend showing where the patient's RAS axes are on the plotted slice. (Doing so would nicely demonstrate the role of torchio.ToCanonical, which is not obvious for non-MRI users.)
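
For reference, a minimal sketch of ensuring RAS at the dataset level with that transform (subjects_list is again a placeholder):

import torchio
from torchio.transforms import ToCanonical

# Reorient every image to RAS+ before it reaches the viewer or the model
dataset = torchio.ImagesDataset(subjects_list, transform=ToCanonical())
sample = dataset[0]  # the images in this sample are now in RAS+ orientation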

farazahmeds commented 4 years ago

> What about this: […]
>
> Did you solve the visualization issues on Colab? And if yes, how?

Thanks, it did work, but there is a problem: the image width gets cropped for some reason.

Here is an example; it's supposed to be 255 x 255:

https://i.imgur.com/4kDvxRH.png


The visualization issue was all me: niwidgets is only meant to be used in a Jupyter notebook, and since I am using PyCharm it sure gave me a problem. I tried ipywidgets instead; it's functional but doesn't work as well. I will look into it again.

fepegar commented 4 years ago

This looks like your image is anisotropic, i.e. voxels are not cubes. Matplotlib doesn't understand anisotropic pixels. I think you can play with the extent and aspect of imshow.

You could preprocess your images with torchio.Resample to make them isotropic before processing them.
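
A minimal sketch of both options; the spacing values are placeholders to be read from the actual image, and Resample is assumed to take a target spacing in mm:

import matplotlib.pyplot as plt
import torchio

# Option 1: correct the pixel aspect ratio at plot time.
# For a (256, 256, 57) volume, data[n_slice] is 256 x 57, with rows about
# 1 mm apart and columns several mm apart (placeholder values below).
row_mm, col_mm = 1.0, 4.0
plt.imshow(data[n_slice], cmap='gray', aspect=row_mm / col_mm)
plt.show()

# Option 2: resample the volume to 1 mm isotropic once, up front
dataset = torchio.ImagesDataset(subjects_list, transform=torchio.Resample(1))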


farazahmeds commented 4 years ago

> This looks like your image is anisotropic, i.e. voxels are not cubes. Matplotlib doesn't understand anisotropic pixels. I think you can play with the extent and aspect of imshow. You could preprocess your images with torchio.Resample to make them isotropic before processing them.

Ahh that makes sense, I'll do just that. Thanks!

fepegar commented 4 years ago

I've added a spacing property that you can use to check your image spacing within TorchIO:

>>> import torchio
>>> subject = torchio.datasets.Colin27()
>>> sample = torchio.ImagesDataset([subject])[0]
>>> sample.t1.spacing
(1.0, 1.0, 1.0)

fepegar commented 4 years ago

Actually, now that I look at it again, your image does not look 255 x 255 but something like 255 x 58. You might be slicing along the wrong axis.

If you print image.orientation, image.spacing and image.shape, that'll give us more information to help you.
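
Something like this, with the sample from earlier in the thread:

image = sample.t1  # a torchio Image
print(image.orientation)  # e.g. ('R', 'A', 'S')
print(image.spacing)
print(image.shape)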

You can slice along other axes:

data[n_slice, :, :]
data[:, n_slice, :]
data[:, :, n_slice]

farazahmeds commented 4 years ago
> data[n_slice, :, :]
> data[:, n_slice, :]
> data[:, :, n_slice]

Yeah, now it makes sense that it loads incorrectly; I am getting shape (256, 256, 57), with 57 being the width.

Shouldn't it be loaded as (57, 256, 256)?

fepegar commented 4 years ago

Great! Why do you assume that shape? The concept of "width" is not well defined in medical images. See https://github.com/fepegar/torchio/issues/176.

farazahmeds commented 4 years ago

> Great! Why do you assume that shape? The concept of "width" is not well defined in medical images. See #176.

Ahh yeah, it's RAS.

Is there any workaround for that? Should I permute the axes of (256, 256, 57) if I have to display an entire image?

fepegar commented 4 years ago

It depends on what plane you want to visualize. You can just plot data[..., n_slice]. Here's some code that might be helpful:

https://github.com/fepegar/miccai-educational-challenge-2019/blob/94067cdb99830ea21e51964ac9c20a34c28aaea4/visualization.py#L113-L173

It will look like this:

[Screenshot: example output of the linked plotting code]
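
The linked code does more (axis labels, one row per image), but here is a minimal sketch of the idea, assuming data is a RAS-oriented 3D array as earlier in this thread:

import numpy as np
import matplotlib.pyplot as plt

def plot_three_planes(data, i=None, j=None, k=None):
    """Plot sagittal, coronal and axial slices of a RAS array (middle slices by default)."""
    si, sj, sk = data.shape
    i = si // 2 if i is None else i
    j = sj // 2 if j is None else j
    k = sk // 2 if k is None else k
    slices = [data[i, :, :], data[:, j, :], data[:, :, k]]
    titles = ['Sagittal', 'Coronal', 'Axial']
    fig, axes = plt.subplots(1, 3)
    for ax, sl, title in zip(axes, slices, titles):
        ax.imshow(np.rot90(sl), cmap='gray')  # rotate so superior/anterior point up
        ax.set_title(title)
        ax.axis('off')
    plt.show()

plot_three_planes(data)
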
romainVala commented 4 years ago

Your RAS annotations are wrong... I quickly tested the plot_volume_interactive function, but it was not interactive... is it supposed to move the slices on mouse click?

fepegar commented 4 years ago

How do you know they're wrong? Note that the function assumes the input images are RAS.

The interactive version should work on a Jupyter Notebook, maybe on IPython too. Is that where you tried?

romainVala commented 4 years ago

Sorry, they are correct; I was expecting R and L on the image, but you put them on the axes, OK. (Maybe it would be clearer to put AP, RL and IS instead, as they also give the direction...) Yes, I tried it in an IPython console.

fepegar commented 4 years ago

Exactly, they're not meant to be labels as in FSL, but standard axes labels, like x or y.

If it doesn't work on IPython, you can try on a notebook here: https://colab.research.google.com/drive/1vqDojKuC4Svb97LdoEyZQygm3jccX4hr

There is a cell with interactive_plots = False, you'll need to set that to True.

farazahmeds commented 4 years ago

> It depends on what plane you want to visualize. You can just plot data[..., n_slice]. Here's some code that might be helpful: […]

I tried changing the axes and it displays the image correctly; the default torch.Size([1, 256, 256, 56]) returned by torchio.ImagesDataset seems at fault?


sample = torchio.ImagesDataset([patient_one])[0]
data = sample['t1']['data'].numpy()[0]
print(sample['t1']['data'].shape)  # torch.Size([1, 256, 256, 56]); shouldn't it follow D, H, W, i.e. ([1, 56, 256, 256])?
plt.imshow(data[n_slice])  # crops the width to 56
plt.show()

# permuting the axes
data_transposed = np.transpose(data, (2, 0, 1))
print(data_transposed.shape)  # (56, 256, 256): iterates in the right order, slices 1-56, each 256 x 256
for slice_2d in data_transposed:
    plt.imshow(slice_2d)  # shows the correct slice
    plt.show()

Looking at it, I feel like I need to permute the axes before feeding the data to the train/val sets?

fepegar commented 4 years ago

You just need to slice your 3D volume along the third axis, which is the one that goes from inferior (slice 0) to superior (slice 55), if your image is RAS.

size_r, size_a, size_s = data.shape
for k in range(size_s):
    slice_2d = data[:, :, k]  # or just data[..., k]
    plt.imshow(slice_2d)
    plt.show()