MIC-DKFZ / nnUNet

Apache License 2.0

How can I train an nnU-Net to predict 2D slices? #2463

Open Chuyun-Shen opened 2 months ago

Chuyun-Shen commented 2 months ago

I have a 3D MRI dataset, and I want to train a 2D network that predicts the most central coronal slice (a single 2D slice). For training, I would like to use the 20 most central 2D slices to improve robustness.
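To make "the 20 most central slices" concrete, here is a minimal NumPy sketch of how those slice indices could be chosen along the coronal axis. The function name and the `k = 20` default are illustrative, not part of nnU-Net:

```python
import numpy as np

def central_slice_indices(n_slices: int, k: int = 20) -> list[int]:
    """Return the k slice indices closest to the centre of an axis
    of length n_slices, clipped to the valid range."""
    centre = n_slices // 2
    start = max(0, centre - k // 2)
    stop = min(n_slices, start + k)
    return list(range(start, stop))

# e.g. for a volume with 100 coronal slices, take indices 40..59
indices = central_slice_indices(100, k=20)
```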

I have some approaches in mind, but since I'm not very familiar with nnU-Net v2's code, I hope you can help me determine which one might be better:

  1. Convert the 3D images into multiple PNGs, resampling the images so that the spacing is 1x1x1, and then train the model on the PNGs.
  2. Train a 2D network using the 3D NIfTI (.nii.gz) images directly, and then use the 2D network for inference on individual slices. I’m unsure how to perform inference on 2D slices from a 3D volume.
  3. Convert the 3D images into 2D images of shape[0] x shape[1] x 1 in .nii.gz format to preserve some header information. However, this results in a degenerate 2D 'patch_size': (np.int64(576), np.int64(1)) in the 2D plan generated by nnUNetv2_plan_and_preprocess.
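For approach 1, the per-slice export could look something like the sketch below. This is a NumPy-only illustration, not nnU-Net code: it assumes the coronal axis is axis 1 (this depends on your image orientation, so check it first), and in practice you would load the volume with something like nibabel and write the resulting array to PNG with a library such as PIL:

```python
import numpy as np

def slice_to_uint8(volume: np.ndarray, index: int, axis: int = 1) -> np.ndarray:
    """Extract one slice along the given axis (assumed coronal here)
    and min-max rescale its intensities to 0-255 for PNG export."""
    sl = np.take(volume, index, axis=axis).astype(np.float64)
    lo, hi = sl.min(), sl.max()
    if hi > lo:
        sl = (sl - lo) / (hi - lo)
    else:
        sl = np.zeros_like(sl)  # constant slice: map everything to 0
    return (sl * 255).round().astype(np.uint8)
```

Note that min-max rescaling per slice discards the original MR intensity scale, which may or may not matter for your task; keeping the data as (2D) NIfTI avoids that loss, which is one argument for approaches 2 and 3.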

Looking forward to your reply. Thank you in advance.