Closed: tommy-qichang closed this issue 2 years ago.
Hi, @tommy-qichang. I'm not sure I understand.
It's a little bit complicated; please let me know if you have any further questions. Thanks.
5952291_20209_2_0_label_sa.nii.gz
import nibabel as nib
nib.load('5952291_20209_2_0_label_sa.nii.gz')
data = nib.load('5952291_20209_2_0_label_sa.nii.gz').get_fdata()
data.shape
(210, 208, 10, 50)
Thanks for elaborating. I'm still not sure what your B means, so I don't fully understand everything. I think "2D slices + 1D position" is just "3D". There are some 3D+time images in the datasets.
In [1]: import torchio as tio
In [2]: fmri = tio.datasets.FPG(load_all=True).fmri
In [4]: fmri
Out[4]: ScalarImage(shape: (220, 70, 70, 40); spacing: (3.00, 3.00, 3.15); orientation: LAS+; path: "/home/fernando/.cache/torchio/fpg/fmri.nrrd")
In [5]: fmri.plot()
(Shown is the volume corresponding to the first time point)
Let's change the shape of the volume to match the one in your example:
In [6]: resize = tio.Resize((210, 208, 9))
In [7]: remove_time = tio.Lambda(lambda x: x[:50])
In [8]: fmri_new = resize(remove_time(fmri))
In [9]: fmri_new
Out[9]: ScalarImage(shape: (50, 210, 208, 9); spacing: (1.00, 1.01, 14.00); orientation: LAS+; dtype: torch.ShortTensor; memory: 37.5 MiB)
In [10]: fmri_new.plot()
Sampling 2D patches is quite straightforward: just use 3D patch sizes with one of the dimensions set to 1. There is an example in the gallery: Sample slices from volumes
In [11]: patch_size = (210, 208, 1)
In [12]: sampler = tio.UniformSampler(patch_size)
In [15]: subject = tio.Subject(image=fmri_new)
In [16]: patch = next(sampler(subject))
In [17]: patch.shape
Out[17]: (50, 210, 208, 1)
Sampling 3D patches across time is trickier because TorchIO was originally designed for 3D. But I can't really help much because I'm not sure what you're trying to get.
B means batch. Thank you for your quick reply and clarification. I'm quite new to this library, so please bear with my naive questions. When you set the patch size to (210, 208, 1), can you also set how many samples to take for the Z dimension (50)? Or do I just need these patches and the queue will handle this?
Thanks again.
No worries. The Z dimension is just like X and Y; it's more convenient to think of your data as a 3D volume rather than a stack of 2D slices. You can use a 3D patch size, e.g. (32, 32, 5). The number of samples depends on how many times you sample. Have you read the docs for Patch-based pipelines?
When you set the patch size to (210, 208, 1), can you also set how many samples to take for the Z dimension (50)? Or do I just need these patches and the queue will handle this?
You can specify the samples_per_volume attribute of torchio.data.Queue, but in Fernando's example you get a random (uniform) choice of the selected slices, so if you ask for 50 you may end up with the same slice several times.
If you want all slices, each exactly once, you should use the GridSampler, as it has recently been added as an acceptable sampler for the queue (#520).
Thank you @fepegar and @romainVala for your great help. Yes, I've read the post, and I think TorchIO is a great library for medical 3D and 4D datasets. Previously, I just used the PyTorch Dataset and DataLoader, which work well for 2D datasets but aren't well suited to shuffling 3D files and getting random 2D slices. I'll try it and get back to you if I need any further suggestions. Thank you again.
No worries! I will close this issue. Please feel free to reopen the issue or a new discussion if you have more questions.
🚀 Feature
I'm currently using 4D cardiac MRI data in NIfTI format. I was wondering whether the UniformSampler can extract 2D or 3D patches from 4D subjects. For instance, I have a NIfTI image of shape (210, 208, 9, 50), but the subject seems to parse the last dimension as the channel dimension, so the subject's shape becomes (50, 210, 208, 9). I want to sample the image along the last two dimensions: (210, 208, 9*50). Is this doable with TorchIO? Thanks