MIC-DKFZ / nnUNet

Apache License 2.0

Multi-modality dataset conversion issue #306

Closed · yuanyuan961 closed this issue 4 years ago

yuanyuan961 commented 4 years ago

Hi Fabian, I have a question about handling a multi-modality dataset. I saw the section on dataset.json in the dataset conversion instructions, where you give an example from the MSD Prostate task, and I am trying to create a custom dataset.json for my private multi-modality dataset. I wonder: do you treat the modalities as separate images sharing one common label mask? What I want to do is feed the modalities as a multi-channel input (i.e., all modalities together as one image) and train a 2D U-Net. Does this work with nnU-Net? If it does, how should I create the dataset.json file? I did notice that you provide a .py script to create the dataset.json for the BraTS dataset, which is indeed a multi-modality dataset, but it looks to me as if that json file processes each modality as an independent image. Or maybe I am missing some important details? I hope this is not too much trouble, and I really appreciate your help.

Best, Yuan.

FabianIsensee commented 4 years ago

Hi, this absolutely works with nnU-Net. Please follow the prostate and BraTS examples. Some things to note:

  • nnU-Net only likes 3D images, so please leave your images 3D and let nnU-Net handle the slicing
  • all modalities must be registered and have the same geometry
  • there can only be one segmentation map for each training image (as in BraTS: T1+T1c+T2+FLAIR -> one segmentation)
  • if you have multiple modalities, save them as patient0_0000.nii.gz, patient0_0001.nii.gz, ... patient1_0000.nii.gz, patient1_0001.nii.gz. Make sure that 0000 and 0001 always point to the same modalities (a file-layout sketch follows below).

Best, Fabian
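A minimal sketch of what this naming convention looks like on disk, assuming a hypothetical Task100_MyTask folder with two modalities (T2 as _0000, ADC as _0001); the raw folder layout and file names here are illustrative assumptions, not something prescribed in this thread:

```python
import shutil
from pathlib import Path

# Hypothetical raw layout: one folder per patient containing one file per
# modality plus the segmentation. All folder and file names are assumptions.
raw = Path("raw_data")
task = Path("nnUNet_raw_data/Task100_MyTask")  # hypothetical task folder
(task / "imagesTr").mkdir(parents=True, exist_ok=True)
(task / "labelsTr").mkdir(parents=True, exist_ok=True)

# Fixed modality order: index 0 -> _0000, index 1 -> _0001, for every patient.
modalities = ["t2.nii.gz", "adc.nii.gz"]

for patient_dir in sorted(raw.iterdir()):
    case = patient_dir.name  # e.g. "patient0"
    for idx, fname in enumerate(modalities):
        shutil.copy(patient_dir / fname,
                    task / "imagesTr" / f"{case}_{idx:04d}.nii.gz")
    # exactly one label map per training case, named without a modality suffix
    shutil.copy(patient_dir / "seg.nii.gz", task / "labelsTr" / f"{case}.nii.gz")
```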

yuanyuan961 commented 4 years ago

@FabianIsensee Thanks for your reply. I now have a question about creating the dataset.json. I did look at the example json file, whose training list looks like this:

[{"image":"./imagesTr/prostate_16.nii.gz","label":"./labelsTr/prostate_16.nii.gz"},{"image":"./imagesTr/prostate_04.nii.gz","label":"./labelsTr/prostate_04.nii.gz"},...]

This represents one image (a single modality) with one label. But what does it look like in the multi-modality case? I see two possible options:

First, [{"image": image 1 modality 1, "label": label of image 1}, {"image": image 1 modality 2, "label": label of image 1}, {"image": image 1 modality 3, "label": label of image 1}, ...]. This option treats each modality as an independent image, but then I don't see how the modalities would interact with each other.

Second, [{"image": image 1 modality 1; image 1 modality 2; image 1 modality 3, "label": label of image 1}, ...]

I am not sure whether the second option is compatible with the code. My concern is how to create the dataset.json for the multi-modality case and how the code handles each modality. (My guess is that the modalities are treated as channels of a multi-channel image; since you mentioned that nnU-Net handles the slicing, perhaps the network stacks the corresponding slices of each modality and then runs the convolutions on that, or maybe it works some other way?)
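That multi-channel guess can be made concrete: with co-registered modalities of identical geometry, stacking the volumes along a leading channel axis yields exactly the kind of per-slice multi-channel input a 2D network would consume. A minimal NumPy sketch, with made-up shapes and modality names:

```python
import numpy as np

# Hypothetical: two co-registered modalities of one patient, same geometry,
# each stored as a 3D volume of shape (depth, height, width).
t2 = np.random.rand(20, 256, 256).astype(np.float32)
adc = np.random.rand(20, 256, 256).astype(np.float32)

# Stack along a new leading axis -> (channels, depth, height, width).
case = np.stack([t2, adc], axis=0)

# A 2D network then consumes one slice at a time as a multi-channel image:
slice_idx = 0
net_input = case[:, slice_idx]   # channels-first slice
print(net_input.shape)           # (2, 256, 256)
```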

FabianIsensee commented 4 years ago

Please just look at the examples. The dataset.json is a remnant of the Medical Segmentation Decathlon. You only need to specify the name of the image in these lists; you do not need to list the modalities there. Again, if you are unsure, look at the dataset.json from Task01 or Task05 of the Medical Segmentation Decathlon.

Best, Fabian
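For reference, the Task05_Prostate dataset.json has roughly the following shape. The sketch below rebuilds it in Python with abridged values (entries and counts shortened from memory), so treat the specific fields as approximate:

```python
import json

# Sketch of a multi-modality dataset.json, modeled on the MSD Prostate task.
# "modality" declares the channels once; each "training" entry names a case
# a single time, with no _0000/_0001 suffix on the image path.
dataset = {
    "name": "Prostate",
    "tensorImageSize": "4D",
    "modality": {"0": "T2", "1": "ADC"},
    "labels": {"0": "background", "1": "PZ", "2": "TZ"},
    "numTraining": 32,
    "training": [
        {"image": "./imagesTr/prostate_16.nii.gz", "label": "./labelsTr/prostate_16.nii.gz"},
        {"image": "./imagesTr/prostate_04.nii.gz", "label": "./labelsTr/prostate_04.nii.gz"},
        # ... one entry per training case
    ],
    "test": ["./imagesTs/prostate_08.nii.gz"],
}

with open("dataset.json", "w") as f:
    json.dump(dataset, f, indent=4)
```

The multi-modality bookkeeping thus lives entirely in the "modality" dict and the _0000/_0001 file suffixes, not in the training list itself.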

HuiLin0220 commented 1 year ago

What if some cases have two modalities, some have one, and some have three?