JunMa11 closed this issue 5 years ago.
Hi Jun Ma,
You are right - the documentation of how to run the Decathlon datasets is incomplete. I will fix this later today. Right now, the Decathlon dataset preprocessing pipeline requires FSL for splitting the modalities. You can, however, do the splitting manually to skip that step - then you don't need FSL.
`base` is the base folder for the raw data. In `base`, nnU-Net will create three subdirectories: `nnUNet_raw` (here you put the downloaded Decathlon datasets), `nnUNet_raw_splitted` (here nnU-Net will save the split data; if you are doing the splitting yourself, put the split data in here) and `nnUNet_raw_cropped` (don't touch this).
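For orientation, the resulting layout under `base` would look roughly like this (a sketch based on the description above; the Task folder is just the example discussed in this issue):

```
base/
├── nnUNet_raw/
│   └── Task04_Hippocampus/      # put the downloaded Decathlon dataset here
├── nnUNet_raw_splitted/         # split data (written by nnU-Net, or by you if splitting manually)
└── nnUNet_raw_cropped/          # managed by nnU-Net, don't touch
```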
You also have to set `preprocessing_output_dir`, otherwise nnU-Net will not know where to save the preprocessed data. `network_training_output_dir`: you need to set this as well.
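As a minimal sketch, assuming these are plain module-level variables in `nnunet/paths.py` (the exact file and layout may differ between versions, so treat the paths below as placeholders):

```python
# nnunet/paths.py (hypothetical excerpt; adjust the paths to your system)

# base folder for the raw data; nnU-Net creates nnUNet_raw, nnUNet_raw_splitted
# and nnUNet_raw_cropped inside it
base = "/data/nnUNet_base"

# where plan_and_preprocess_task.py stores the preprocessed data
preprocessing_output_dir = "/data/nnUNet_preprocessed"

# where trained models and training logs end up
network_training_output_dir = "/data/nnUNet_trained_models"
```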
> `Task04_Hippocampus` is a 3D dataset, why does `split_4d` run?
Splitting this won't do anything, so why not? During the Decathlon I didn't know what I would be getting in phase II, so I just ran this for all datasets.
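For intuition, the 4D-to-3D split amounts to something like the sketch below (using nibabel for illustration, not the repository's actual implementation; the function name and output naming are assumptions based on the `_0000` suffix discussed in this thread). For an already-3D dataset like Task04_Hippocampus, the image is effectively just passed through:

```python
import os
import nibabel as nib

def split_4d_nifti(in_file, out_dir):
    """Rough sketch: write each modality of a 4D NIfTI as its own 3D file
    with an _0000/_0001/... suffix. 3D inputs are simply copied through as _0000."""
    img = nib.load(in_file)
    base_name = os.path.basename(in_file).replace(".nii.gz", "")
    data = img.get_fdata()
    if data.ndim == 3:
        # already 3D (e.g. Task04_Hippocampus): nothing to split
        nib.save(img, os.path.join(out_dir, base_name + "_0000.nii.gz"))
        return
    for i in range(data.shape[-1]):
        vol = nib.Nifti1Image(data[..., i], img.affine, img.header)
        nib.save(vol, os.path.join(out_dir, base_name + "_%04d.nii.gz" % i))
```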
> Do we need to convert the data `patientID.nii.gz` in Task04_Hippocampus to `patientID_0000.nii.gz`?
If you let nnU-Net do everything (including the splitting), then you don't have to do that; it will do it for you. If you do the splitting manually, then you need to set the names with the `_0000.nii.gz` suffix.
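If you go the manual route for a single-modality 3D dataset like Task04_Hippocampus, this boils down to copying each image with the `_0000` suffix; a minimal sketch (the directories are placeholders, adjust them to your `nnUNet_raw_splitted` layout):

```python
import os
import shutil

src_dir = "/path/to/nnUNet_raw/Task04_Hippocampus/imagesTr"            # placeholder
dst_dir = "/path/to/nnUNet_raw_splitted/Task04_Hippocampus/imagesTr"   # placeholder
os.makedirs(dst_dir, exist_ok=True)

for fname in os.listdir(src_dir):
    # skip hidden files, copy each case as caseID_0000.nii.gz
    if fname.endswith(".nii.gz") and not fname.startswith("."):
        case_id = fname[:-len(".nii.gz")]
        shutil.copy(os.path.join(src_dir, fname),
                    os.path.join(dst_dir, case_id + "_0000.nii.gz"))
```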
I hope this helps! Best, Fabian
Hi Fabian,
Thanks for your help. Please give me a few days to run the experiments again. If it works well, I will close this issue.
Best, Jun
Dear DKFZ,
Thanks for the great repo. I have some problems with the path setting.
Environment: Linux, PyTorch 1.0. The installation works well.
Data preparation: I downloaded the `Task04_Hippocampus` dataset from the Medical Segmentation Decathlon and put it into `path/nnUNet/nnunet`.

Step 1. Set `base = path/nnUNet/nnunet/Task04_Hippocampus`.

Step 2. Run `python experiment_planning/plan_and_preprocess_task.py -t Task04_Hippocampus`; the following error occurred:

At the same time, two folders (`nnUNet_raw` and `nnUNet_raw_splitted`) are generated in `path/nnUNet/nnunet/Task04_Hippocampus`. I modify `network_training_output_dir` as:

Step 3. Besides, I put the `Task04_Hippocampus` dataset into `path/nnUNet/nnunet/Task04_Hippocampus/nnUNet_raw/` and `path/nnUNet/nnunet/Task04_Hippocampus/nnUNet_raw_splitted/`,
but a new error occurred:

Questions:
1. Besides `base`, do we need to set `preprocessing_output_dir` and `network_training_output_dir`?
2. `Task04_Hippocampus` is a 3D dataset, why does `split_4d` run?
3. Do we need to convert the data `patientID.nii.gz` in Task04_Hippocampus to `patientID_0000.nii.gz`?
I also read the introduction in challenge_dataset_conversion. It describes well how to convert a personal dataset to make it compatible with nnU-Net, especially for multi-modality data. Since nnU-Net was initially developed for the MSD challenge, it would be good to provide an example for an MSD dataset, too. I recommend `Task04_Hippocampus`, because this dataset is very small.