MIC-DKFZ / nnUNet

Apache License 2.0

empty dice in multi-class training #439

Closed: bkonk closed this issue 3 years ago

bkonk commented 3 years ago

I'm trying to train on multi-class data with the foreground for my three classes stored as 1.0, 2.0, and 3.0 in the corresponding NIfTI files. When I open the label files from the preprocessed folder, everything looks as expected. However, when I train the model (2d) I get RuntimeWarnings in my first epoch, empty Dice scores, and both my training and validation loss plateau immediately at -0.9990. This is my first try with multi-class labels for nnU-Net, so maybe I'm doing something wrong?
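
As a quick sanity check, the raw label files can be inspected directly (a minimal sketch assuming SimpleITK is installed; the file path is a placeholder):

import numpy as np
import SimpleITK as sitk

# placeholder path: one of the raw label files in labelsTr
arr = sitk.GetArrayFromImage(sitk.ReadImage("labelsTr/case_000.nii.gz"))
print(arr.dtype)       # how the labels are stored on disk
print(np.unique(arr))  # expected: background 0 plus consecutive classes 1, 2, 3 with no gaps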

epoch:  0
/root/anaconda3/lib/python3.8/site-packages/torch/functional.py:1241: UserWarning: torch.norm is deprecated and may be removed in a future PyTorch release. Use torch.linalg.norm instead.
  warnings.warn((
2020-12-17 09:52:38.733872: train loss : -0.3720
2020-12-17 09:52:42.817811: validation loss: -0.9990
/root/anaconda3/lib/python3.8/site-packages/nnunet/training/network_training/nnUNetTrainer.py:702: RuntimeWarning: invalid value encountered in double_scalars
  global_dc_per_class = [i for i in [2 * i / (2 * i + j + k) for i, j, k in
/root/anaconda3/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3334: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
/root/anaconda3/lib/python3.8/site-packages/numpy/core/_methods.py:161: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
2020-12-17 09:52:42.821286: Average global foreground Dice: []
2020-12-17 09:52:42.821537: (interpret this as an estimate for the Dice of the different classes. This is not exact.)
2020-12-17 09:52:43.462000: lr: 0.009991
2020-12-17 09:52:43.462220: This epoch took 54.552669 s

2020-12-17 09:52:43.462272: 
epoch:  1
2020-12-17 09:53:27.629395: train loss : -0.9990
2020-12-17 09:53:31.659679: validation loss: -0.9990
2020-12-17 09:53:31.661548: Average global foreground Dice: []
2020-12-17 09:53:31.662207: (interpret this as an estimate for the Dice of the different classes. This is not exact.)
2020-12-17 09:53:32.288724: lr: 0.009982
2020-12-17 09:53:32.288897: This epoch took 48.826578 s

2020-12-17 09:53:32.288963: 
epoch:  2
2020-12-17 09:54:16.244415: train loss : -0.9990
2020-12-17 09:54:20.324074: validation loss: -0.9990
2020-12-17 09:54:20.324756: Average global foreground Dice: []
2020-12-17 09:54:20.324847: (interpret this as an estimate for the Dice of the different classes. This is not exact.)
2020-12-17 09:54:20.948953: lr: 0.009973
2020-12-17 09:54:20.949126: This epoch took 48.660115 s
FabianIsensee commented 3 years ago

This does not look right. It is supposed to list your labels in these statements:

2020-12-17 09:52:42.821286: Average global foreground Dice: []

Please make sure you set the labels correctly in your label images and that they are also listed correctly in your dataset.json.
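
For example, the labels entry can be cross-checked against the label files like this (a minimal sketch; the path is a placeholder and the label names are only examples):

import json

# placeholder path to the task's dataset.json
with open("nnUNet_raw_data/Task100_XXX/dataset.json") as f:
    dataset = json.load(f)

# nnU-Net expects "0" for background and consecutive integer keys for the foreground,
# e.g. {"0": "background", "1": "class1", "2": "class2", "3": "class3"}
print(dataset["labels"])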

rohan19250 commented 3 years ago

Hi Fabian,

I am running into a similar issue. nnU-Net detects 3 classes, but I only get Dice scores for 2 of them. I have checked the labels in Slicer and can see all 3 classes.

Here is the log file of the training.

dataset.json.zip


###############################################
I am running the following nnUNet: 3d_fullres
My trainer class is:  <class 'nnunet.training.network_training.nnUNetTrainerV2.nnUNetTrainerV2'>
For that I will be using the following configuration:
**num_classes:  3**
modalities:  {0: 'T1', 1: 'T2', 2: 'FLAIR'}
use_mask_for_norm OrderedDict([(0, True), (1, True), (2, True)])
keep_only_largest_region None
min_region_size_per_class None
min_size_per_class None
normalization_schemes OrderedDict([(0, 'nonCT'), (1, 'nonCT'), (2, 'nonCT')])
stages...

stage:  0
{'batch_size': 2, 'num_pool_per_axis': [5, 5, 5], 'patch_size': array([128, 128, 128]), 'median_patient_size_in_voxels': array([134, 165, 133]), 'current_spacing': array([1., 1., 1.]), 'original_spacing': array([1., 1., 1.]), 'do_dummy_2D_data_aug': False, 'pool_op_kernel_sizes': [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]], 'conv_kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]}

I am using stage 0 from these plans
I am using sample dice + CE loss

I am using data from this folder:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1
###############################################
loading dataset
loading all case properties
unpacking dataset
done
2021-01-08 02:31:27.137189: lr: 0.01
using pin_memory on device 0
using pin_memory on device 0
2021-01-08 02:32:01.824926: Unable to plot network architecture:
2021-01-08 02:32:01.825288: No module named 'hiddenlayer'
2021-01-08 02:32:01.839259: 
printing the network instead:

........
........
2021-01-08 02:32:01.851447: 

2021-01-08 02:32:01.851905: 
epoch:  0
2021-01-08 02:46:37.841524: train loss : -0.1140
2021-01-08 02:47:29.140253: validation loss: -0.1765
/mnt/data/home/rxb362/miniconda3/envs/nnUnet/lib/python3.8/site-packages/nnunet/training/network_training/nnUNetTrainer.py:702: RuntimeWarning: invalid value encountered in double_scalars
  global_dc_per_class = [i for i in [2 * i / (2 * i + j + k) for i, j, k in
2021-01-08 02:47:29.175359: Average global foreground Dice: [0.6699301012217836, 0.49074388936419067]
2021-01-08 02:47:29.176987: (interpret this as an estimate for the Dice of the different classes. This is not exact.)
2021-01-08 02:47:33.862724: lr: 0.009991
2021-01-08 02:47:33.863200: This epoch took 932.011043 s

2021-01-08 02:47:33.863305: 
epoch:  1
2021-01-08 03:01:29.063277: train loss : -0.3499
2021-01-08 03:02:08.681182: validation loss: -0.1943
2021-01-08 03:02:08.684788: Average global foreground Dice: [0.5998248075980414, 0.5930869011021969]
2021-01-08 03:02:08.686005: (interpret this as an estimate for the Dice of the different classes. This is not exact.)
2021-01-08 03:02:09.491269: lr: 0.009982
2021-01-08 03:02:09.588167: saving checkpoint...
2021-01-08 03:02:11.475967: done, saving took 1.98 seconds
2021-01-08 03:02:11.484610: This epoch took 877.621196 s

Here is the log of the data integrity check, which also passed without issues.

Verifying training set
checking case 008
checking case 001
checking case 003
checking case 002
checking case 010
Verifying label values
Expected label values are [0, 1, 2, 3]
Labels OK
Dataset OK
008
003
001
002
010

 Task100_UH_brain_peds
number of threads:  (8, 8) 

using nonzero mask for normalization
using nonzero mask for normalization
using nonzero mask for normalization
Are we using the nonzero mask for normalizaion? OrderedDict([(0, True), (1, True), (2, True)])
the median shape of the dataset is  [134. 165. 133.]
the max shape in the dataset is  [158. 171. 138.]
the min shape in the dataset is  [127. 157. 125.]
we don't want feature maps smaller than  4  in the bottleneck
the transposed median shape of the dataset is  [134. 165. 133.]
generating configuration for 3d_fullres
{0: {'batch_size': 2, 'num_pool_per_axis': [5, 5, 5], 'patch_size': array([128, 128, 128]), 'median_patient_size_in_voxels': array([134, 165, 133]), 'current_spacing': array([1., 1., 1.]), 'original_spacing': array([1., 1., 1.]), 'do_dummy_2D_data_aug': False, 'pool_op_kernel_sizes': [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]], 'conv_kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]}}
transpose forward [0, 1, 2]
transpose backward [0, 1, 2]
Initializing to run preprocessing
npz folder: /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_raw/nnUNet_cropped_data/Task100_UH_brain_peds
output_folder: /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds
no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 132, 162, 130)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 132, 162, 130)} 

no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 127, 162, 130)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 127, 162, 130)} 

no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 130, 165, 135)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 130, 165, 135)} 

no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 155, 168, 133)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 155, 168, 133)} 

no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 136, 171, 135)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 136, 171, 135)} 

3 4572
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_stage0/004.npz
1 7162
no resampling necessary
3 10000
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_stage0/006.npz
no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 147, 169, 133)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 147, 169, 133)} 

no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 143, 171, 129)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 143, 171, 129)} 

no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 158, 165, 138)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 158, 165, 138)} 

1 10000
3 2964
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_stage0/001.npz
1 4976
3 5043
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_stage0/008.npz
1 10000
1 4408
1 10000
1 6859
3 8291
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_stage0/005.npz
3 9377
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_stage0/003.npz
3 341
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_stage0/009.npz
3 969
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_stage0/002.npz
no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 127, 165, 135)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 127, 165, 135)} 

1 1855
no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 131, 157, 125)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 131, 157, 125)} 

3 10000
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_stage0/011.npz
1 10000
3 10000
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_stage0/010.npz
using nonzero mask for normalization
using nonzero mask for normalization
using nonzero mask for normalization
Are we using the nonzero maks for normalizaion? OrderedDict([(0, True), (1, True), (2, True)])
the median shape of the dataset is  [134. 165. 133.]
the max shape in the dataset is  [158. 171. 138.]
the min shape in the dataset is  [127. 157. 125.]
we don't want feature maps smaller than  4  in the bottleneck
the transposed median shape of the dataset is  [134. 165. 133.]
[{'batch_size': 48, 'num_pool_per_axis': [5, 5], 'patch_size': array([192, 160]), 'median_patient_size_in_voxels': array([134, 165, 133]), 'current_spacing': array([1., 1., 1.]), 'original_spacing': array([1., 1., 1.]), 'pool_op_kernel_sizes': [[2, 2], [2, 2], [2, 2], [2, 2], [2, 2]], 'conv_kernel_sizes': [[3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3]], 'do_dummy_2D_data_aug': False}]
Initializing to run preprocessing
npz folder: /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_raw/nnUNet_cropped_data/Task100_UH_brain_peds
output_folder: /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds
no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 143, 171, 129)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 143, 171, 129)} 
no resampling necessary

normalization...
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 127, 162, 130)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 127, 162, 130)} 

normalization...
no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 136, 171, 135)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 136, 171, 135)} 

normalization...
no resampling necessary
no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 130, 165, 135)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 130, 165, 135)} 

normalization...
normalization done
normalization done
normalization done
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 158, 165, 138)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 158, 165, 138)} 

normalization...
normalization done
no resampling necessary
no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 155, 168, 133)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 155, 168, 133)} 

normalization...
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 132, 162, 130)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 132, 162, 130)} 

normalization...
3 4572
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_2D_stage0/004.npz
normalization done
no resampling necessary
1 10000
3 9377
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_2D_stage0/003.npz
1 10000
3 969
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_2D_stage0/002.npz
normalization done
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 147, 169, 133)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 147, 169, 133)} 

normalization...
normalization done
1 10000
3 2964
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_2D_stage0/001.npz
1 4408
3 8291
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_2D_stage0/005.npz
1 7162
1 4976
3 5043
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_2D_stage0/008.npz
normalization done
3 10000
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_2D_stage0/006.npz
1 6859
3 341
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_2D_stage0/009.npz
no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 131, 157, 125)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 131, 157, 125)} 

normalization...
no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 127, 165, 135)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 127, 165, 135)} 

normalization done
normalization...
1 10000
3 10000
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_2D_stage0/010.npz
normalization done
1 1855
3 10000
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task100_UH_brain_peds/nnUNetData_plans_v2.1_2D_stage0/011.npz

Any suggestions on what the issue could be? I have attached the dataset.json file in case it helps.

FabianIsensee commented 3 years ago

Your labels must be wrong. They must be consecutive integers. You do not have three classes in the dataset! All I see is class 1 and class 3:

1 1855
3 10000

Where is class 2?
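
A quick way to check this across the whole training set (a minimal sketch assuming the standard labelsTr layout; the path is a placeholder):

import glob
import numpy as np
import SimpleITK as sitk

# placeholder path to the task's labelsTr folder
for path in sorted(glob.glob("nnUNet_raw_data/Task100_XXX/labelsTr/*.nii.gz")):
    arr = sitk.GetArrayFromImage(sitk.ReadImage(path))
    print(path, np.unique(arr))  # every expected class (1, 2, 3) should appear somewhere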

FabianIsensee commented 3 years ago

(nnU-Net does not print the Dice scores for NaNs. I need to address that to avoid confusion.)
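
For context, the NaNs come from the global Dice formula shown in the RuntimeWarning above: when a class occurs in neither the predictions nor the references, the formula divides zero by zero (a minimal illustration):

import numpy as np

tp = fp = fn = np.float64(0)        # class absent from both prediction and reference
dice = 2 * tp / (2 * tp + fp + fn)  # RuntimeWarning: invalid value encountered
print(dice)                         # nan, so the class is dropped from the printed list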

rohan19250 commented 3 years ago

While converting the labels I printed out whether they were converted and which values were present. I am running on just 5 patients for now to set up the code. Below is my function (essentially the one from https://github.com/MIC-DKFZ/nnUNet/blob/master/nnunet/dataset_conversion/Task032_BraTS_2018.py). The code recognizes the old labels and prints out the new labels too.

I loaded the labels in Slicer and they seem to contain 3 labels. Not sure what is going wrong here. Labels.zip

Read image and convert label:

import numpy as np
import SimpleITK as sitk

# seg holds the path to the patient's original label file (the snippet runs inside a
# loop over patients), e.g. .../patients/003/all_label.nii.gz
img = sitk.ReadImage(seg)
img_npy = sitk.GetArrayFromImage(img)

uniques, counts = np.unique(img_npy, return_counts=True)
print("old_labels", uniques)
print("old label counts", counts)

# remap the original label values (1, 4, 5) to consecutive integers for nnU-Net
# (this mapping is specific to the peds brain project, not the BraTS convention)
seg_new = np.zeros_like(img_npy)
seg_new[img_npy == 5] = 2
seg_new[img_npy == 4] = 1
seg_new[img_npy == 1] = 3

uniques, counts = np.unique(seg_new, return_counts=True)
print("new_labels", uniques)
print("new label counts", counts)

003
003
003
003
/mnt/data/home/rxb362/projects/segmentation/UH_brain_peds/patients/003/all_label.nii.gz
old_labels [0 1 4 5]
old label counts [7695963    9377   16237    4743]
new_labels [0 1 2 3]
new label counts [7695963   16237    4743    9377]
16-bit signed integer
001
001
001
001
001
/mnt/data/home/rxb362/projects/segmentation/UH_brain_peds/patients/001/all_label.nii.gz
old_labels [0 1 4 5]
old label counts [6002857    2964   29970    1077]
new_labels [0 1 2 3]
new label counts [6002857   29970    1077    2964]
16-bit signed integer
010
010
010
010
/mnt/data/home/rxb362/projects/segmentation/UH_brain_peds/patients/010/all_label.nii.gz
old_labels [0 1 4 5]
old label counts [4373122   16696    7559    9407]
new_labels [0 1 2 3]
new label counts [4373122    7559    9407   16696]
16-bit signed integer
002
002
002
002
/mnt/data/home/rxb362/projects/segmentation/UH_brain_peds/patients/002/all_label.nii.gz
old_labels [0 1 4 5]
old label counts [6490156     969    8795    1764]
new_labels [0 1 2 3]
new label counts [6490156    8795    1764     969]
16-bit signed integer
008
008
008
008
/mnt/data/home/rxb362/projects/segmentation/UH_brain_peds/patients/008/all_label.nii.gz
old_labels [0 1 4 5]
old label counts [7725280    5043    3154    1822]
new_labels [0 1 2 3]
new label counts [7725280    3154    1822    5043]
16-bit signed integer
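
(The conversion snippet above does not show the write-back step. A minimal sketch of it, assuming img and seg_new from that snippet and a placeholder output_path, could look like this:)

import numpy as np
import SimpleITK as sitk

# img and seg_new come from the conversion snippet above; output_path is a placeholder,
# e.g. the target file in the task's labelsTr folder
img_new = sitk.GetImageFromArray(seg_new.astype(np.uint8))
img_new.CopyInformation(img)           # preserve origin, spacing and direction
sitk.WriteImage(img_new, output_path)
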
rohan19250 commented 3 years ago

I was able to solve this issue. Thanks, Fabian, for pointing that out! I noticed that the data integrity / preprocessing step was not picking up the refreshed dataset and was still using cached data from an earlier run, which probably only had 2 labels. I solved it by creating a new folder/new task and running the data integrity check and preprocessing again, which now shows all 3 labels correctly.
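
An alternative to creating a new task would have been to delete the cached cropped and preprocessed data so that nnUNet_plan_and_preprocess regenerates everything from the updated label files (a sketch using the paths from the log above):

import shutil

# cached data from the earlier run; removing it forces the crop and preprocessing
# steps to rebuild from the corrected labels
base = "/mnt/data/home/rxb362/projects/segmentation/nnunet_dl"
shutil.rmtree(base + "/nnUNet_raw/nnUNet_cropped_data/Task100_UH_brain_peds", ignore_errors=True)
shutil.rmtree(base + "/nnUNet_preprocessed/Task100_UH_brain_peds", ignore_errors=True)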

Verifying label values
Expected label values are [0, 1, 2, 3]
Labels OK
Dataset OK
008
001
003
002
010
before crop: (3, 152, 192, 151) after crop: (3, 131, 157, 125) spacing: [1. 1. 1.] 

before crop: (3, 188, 197, 163) after crop: (3, 130, 165, 135) spacing: [1. 1. 1.] 

before crop: (3, 203, 204, 157) after crop: (3, 136, 171, 135) spacing: [1. 1. 1.] 

before crop: (3, 219, 209, 169) after crop: (3, 155, 168, 133) spacing: [1. 1. 1.] 

before crop: (3, 219, 210, 168) after crop: (3, 143, 171, 129) spacing: [1. 1. 1.] 

 Task101_peds
number of threads:  (8, 8) 

using nonzero mask for normalization
using nonzero mask for normalization
using nonzero mask for normalization
Are we using the nonzero mask for normalizaion? OrderedDict([(0, True), (1, True), (2, True)])
the median shape of the dataset is  [136. 168. 133.]
the max shape in the dataset is  [155. 171. 135.]
the min shape in the dataset is  [130. 157. 125.]
we don't want feature maps smaller than  4  in the bottleneck
the transposed median shape of the dataset is  [136. 168. 133.]
generating configuration for 3d_fullres
{0: {'batch_size': 2, 'num_pool_per_axis': [5, 5, 4], 'patch_size': array([128, 160, 112]), 'median_patient_size_in_voxels': array([136, 168, 133]), 'current_spacing': array([1., 1., 1.]), 'original_spacing': array([1., 1., 1.]), 'do_dummy_2D_data_aug': False, 'pool_op_kernel_sizes': [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 1]], 'conv_kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]}}
transpose forward [0, 1, 2]
transpose backward [0, 1, 2]
Initializing to run preprocessing
npz folder: /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_raw/nnUNet_cropped_data/Task101_peds
output_folder: /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task101_peds
no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 131, 157, 125)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 131, 157, 125)} 

no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 130, 165, 135)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 130, 165, 135)} 

no resampling necessary
no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 143, 171, 129)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 143, 171, 129)} 

no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 136, 171, 135)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 136, 171, 135)} 

no resampling necessary
no resampling necessary
before: {'spacing': array([1., 1., 1.]), 'spacing_transposed': array([1., 1., 1.]), 'data.shape (data is transposed)': (3, 155, 168, 133)} 
after:  {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (3, 155, 168, 133)} 

1 7559
2 9407
3 10000
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task101_peds/nnUNetData_plans_v2.1_stage0/010.npz
1 10000
2 1077
3 2964
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task101_peds/nnUNetData_plans_v2.1_stage0/001.npz
1 10000
1 8795
2 4743
2 1764
3 9377
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task101_peds/nnUNetData_plans_v2.1_stage0/003.npz
1 3154
3 969
saving:  /mnt/data/home/rxb362/projects/segmentation/nnunet_dl/nnUNet_preprocessed/Task101_peds/nnUNetData_plans_v2.1_stage0/002.npz
2 1822
3 5043

Now I am getting Dice scores for all 3 classes. :)

FabianIsensee commented 3 years ago

Nice! Glad to hear it is working now.