I suspect your network configuration isn't compatible with patches that are that small. You're creating your UNet with
from monai.networks.nets import UNet
from monai.networks.layers import Norm
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = UNet(
    dimensions=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
    norm=Norm.BATCH,
).to(device)
With this configuration your network will have 5 layers, and with a stride of 2 for each downsampling you will halve the spatial dimensions of the input image 4 times. This means your image sizes in the network are 20**3 -> 10**3 -> 5**3 -> 2**3 -> 1**3. Attempting to upsample this in the decode path of the network will not work because of the default padding in the upsample convolutions.
I would suggest using patch sizes that are multiples of powers of 2: a dimension of size M*2**N allows you to downsample N times. In your case you could use a patch size of 32 and stack your volumes only once so the depth dimension is 40, or double each slice in your volume to get the same.
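For illustration, a minimal sketch of the slice-doubling option, assuming the volume is a NumPy array with depth as the last axis (the array name and axis order are my assumptions, not from the thread):

import numpy as np

# Hypothetical 1200x340 volume with 20 planes along the last (depth) axis.
volume = np.random.rand(1200, 340, 20).astype(np.float32)

# Repeat each plane once so the depth becomes 40, from which
# 32**3 patches can be sampled and downsampled cleanly.
thickened = np.repeat(volume, 2, axis=-1)
print(thickened.shape)  # (1200, 340, 40)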
Alternatively you can stick with 20**3 as your patch size and use (64, 128, 256) as your channels argument and (2, 2) as your strides argument to make a shallower network and see how that works.
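For concreteness, a minimal sketch of that shallower configuration, reusing the other arguments from the snippet above (which is an assumption on my part):

from monai.networks.nets import UNet
from monai.networks.layers import Norm

# Two downsamplings only, so a 20**3 patch shrinks 20 -> 10 -> 5
# and can still be upsampled back in the decode path.
model = UNet(
    dimensions=3,
    in_channels=1,
    out_channels=2,
    channels=(64, 128, 256),
    strides=(2, 2),
    num_res_units=2,
    norm=Norm.BATCH,
)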
Thanks very much Eric. There are 20 planes in the MRI. Should I be trying to "thicken" the plane volumes so that they correspond more to the physical dimensions of the X-Y plane, or is that irrelevant? If so, how do I do that? My MRI images are 20 NumPy bitmaps. I think something like this is done in the Spleen tutorial notebook here:
Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 2.0), mode=("bilinear", "nearest")),
The Spleen example from NVIDIA uses NIfTI format data that comes with metadata, including an affine transform that the Spacingd transform accesses. What would be the corresponding arguments for the Spacing transform?
Also, one last question. The tumors (in this case, from Neurofibromatosis type 1) are very sparse in the whole MRI volume: 98% of the volume is non-tumor and 2% is tumor. Do you have any thoughts on how to increase accuracy for this very sparse class? I have only 50 fully labelled examples to work with. Should I explore Waterloo's "Less than one shot" learning technique?
For Spacing you provide the target voxel spacing (pixdim) for each axis; with no affine supplied the current spacing is taken to be 1, so a pixdim below 1 increases the number of voxels along that axis. If you want to go from 1200x340x20 to 1200x340x40 you want something like this:
from monai.transforms import Spacing
import torch

s = Spacing(pixdim=(1, 1, 0.49))
t = torch.rand(1, 1200, 340, 20)
print(s(t)[0].shape)  # (1, 1200, 340, 40)
For using Spacingd you would use the same pixdim values as in this example.
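As a sketch, the dictionary-based equivalent with the same pixdim (the "image"/"label" keys and interpolation modes are assumptions borrowed from the spleen tutorial):

from monai.transforms import Spacingd

# Resample both image and label to the same target spacing;
# nearest-neighbour interpolation keeps the label values discrete.
spacing = Spacingd(
    keys=["image", "label"],
    pixdim=(1, 1, 0.49),
    mode=("bilinear", "nearest"),
)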
I'm honestly not sure what to suggest for your particular problem; I shall ask my colleagues to see if anyone has something specific to this particular segmentation problem to contribute.
Thanks Eric! Intuitively I don't think it should matter. The spleen tutorial has pixdim=(1.5, 1.5, 2.0); I'm curious what motivated that. It seems like they were doing it to reduce the size of the inputs:
from monai.transforms import Spacing
import torch

s = Spacing(pixdim=(1.5, 1.5, 2.0))
t = torch.rand(1, 226, 257, 113)
print(s(t)[0].shape)  # (1, 151, 172, 57)
Regarding the overall number of samples (50 in my case), and sparsity in the image set of positive labels (tiny tumors), any thoughts will be very welcome.
This is in the context of the Children's Tumor Foundation Hack for NF which is still open for participation.
Hello! I work with sparse data (cerebral microbleed segmentation), and I would recommend you look into the following:
Generally with very imbalanced data it takes a lot of patience to fine-tune the network.
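As one illustration of a common approach to heavy class imbalance in patch-based training (not necessarily what is being recommended above), MONAI's RandCropByPosNegLabeld can oversample patches that contain the rare positive class; the keys, patch size, and sampling ratio below are assumptions:

from monai.transforms import RandCropByPosNegLabeld

# Draw 20**3 patches, with positive (tumor-containing) patches sampled
# three times as often as negative ones to counter the 98%/2% imbalance.
sampler = RandCropByPosNegLabeld(
    keys=["image", "label"],
    label_key="label",
    spatial_size=(20, 20, 20),
    pos=3,
    neg=1,
    num_samples=4,
)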
@Irme thanks. Please consider joining the CTF NF Hackathon which is still open and has plenty of medical data. Here are some details:
https://nfhack-platform.bemyapp.com/#/event
We have more than 500 participants registered from around the world, as well as more than 40 mentors and 21 projects in progress. Registration remains open throughout the Hackathon and projects can be submitted up until November 13th.
Describe the bug
I am trying to adapt the spleen_segmentation_3d.ipynb notebook to imaging data with a slightly different shape. The images in the Spleen set are 226x257 with 113 planes in the stack. My images are 1200x340 with 20 planes in the stack. The notebook samples the data in cubes of size 96x96x96. To get the example notebook to work, I have to duplicate my data on the planes to be 20+20+20+20+16 = 96. Otherwise it breaks, for the obvious reason that you can't get 96 slices out of 20.
Suppose however that I change the cube size to 20x20x20, so I don't duplicate planes to match the exact setup of the notebook. I still get a problem. Here is the problem; please let me know how to resolve it:
To Reproduce
Here is the code:
Expected behavior
The UNet should train and not break.
Environment (please complete the following information):
OS: Ubuntu 20.04 LTS
MONAI version: 0.3.0
Python version: 3.8.2 (default, Mar 26 2020, 15:53:00) [GCC 7.3.0]
OS version: Linux (5.4.0-52-generic)
Numpy version: 1.18.1
Pytorch version: 1.5.0
MONAI flags: HAS_EXT = False, USE_COMPILED = False

Optional dependencies:
Pytorch Ignite version: 0.3.0
Nibabel version: 3.1.0
scikit-image version: 0.16.2
Pillow version: 7.1.2
Tensorboard version: 2.2.1
gdown version: 3.12.2
TorchVision version: 0.6.0a0+82fd1c8
ITK version: 5.1.1
tqdm version: 4.50.2
Additional context
I am trying to do tumor detection on whole-body MRI scans. The tumors are small and the body is large. So far this is giving me an average F1 score of 0.17 using this library, training with the 20+20+20+20+16 stacking workaround.