After organizing my data according to the tutorial on GitHub, I get the error below when I try to train the model. Could you please help?
!nnUNet_train 3d_fullres nnUNetTrainerV2 Task501_BraTS 0
Please cite the following paper when using nnUNet:
Fabian Isensee, Paul F. Jäger, Simon A. A. Kohl, Jens Petersen, Klaus H. Maier-Hein "Automated Design of Deep Learning Methods for Biomedical Image Segmentation" arXiv preprint arXiv:1904.08128 (2020).
If you have questions or suggestions, feel free to open an issue at https://github.com/MIC-DKFZ/nnUNet
###############################################
I am running the following nnUNet: 3d_fullres
My trainer class is: <class 'nnunet.training.network_training.nnUNetTrainerV2.nnUNetTrainerV2'>
For that I will be using the following configuration:
num_classes: 3
modalities: {0: 'FLAIR'}
use_mask_for_norm OrderedDict([(0, True)])
keep_only_largest_region None
min_region_size_per_class None
min_size_per_class None
normalization_schemes OrderedDict([(0, 'nonCT')])
stages...
stage: 0 {'batch_size': 2, 'num_pool_per_axis': [5, 5, 4], 'patch_size': array([128, 160, 112]), 'median_patient_size_in_voxels': array([136, 172, 136]), 'current_spacing': array([1., 1., 1.]), 'original_spacing': array([1., 1., 1.]), 'do_dummy_2D_data_aug': False, 'pool_op_kernel_sizes': [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 1]], 'conv_kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]}
I am using stage 0 from these plans
I am using sample dice + CE loss
I am using data from this folder: /research/projects/Mana/GBM_PsP/nnUNet/nnunet/preprocessed/Task501_BraTS/nnUNetData_plans_v2.1
###############################################
2021-09-26 09:53:49.584807: Creating new split...
unpacking dataset
done
2021-09-26 09:53:51.585938: lr: 0.01
2021-09-26 09:54:32.590872: Unable to plot network architecture:
2021-09-26 09:54:32.592979: No module named 'graphviz'
2021-09-26 09:54:32.593699:
printing the network instead:
2021-09-26 09:54:32.594548: Generic_UNet( (conv_blocks_localization): ModuleList( (0): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(640, 320, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(320, 320, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (1): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(512, 256, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(256, 256, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (2): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(256, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(128, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (3): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(128, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (4): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(64, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(32, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) ) (conv_blocks_context): ModuleList( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(1, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) (1): ConvDropoutNormNonlin( (conv): Conv3d(32, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(32, 64, kernel_size=(3, 3, 3), stride=(2, 2, 2), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) (1): ConvDropoutNormNonlin( (conv): Conv3d(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (2): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(64, 128, kernel_size=(3, 3, 3), stride=(2, 2, 2), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) (1): ConvDropoutNormNonlin( (conv): Conv3d(128, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (3): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(128, 256, kernel_size=(3, 3, 3), stride=(2, 2, 2), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) (1): ConvDropoutNormNonlin( (conv): Conv3d(256, 256, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (4): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(256, 320, kernel_size=(3, 3, 3), stride=(2, 2, 2), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) (1): ConvDropoutNormNonlin( (conv): Conv3d(320, 320, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (5): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(320, 320, kernel_size=(3, 3, 3), stride=(2, 2, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(320, 320, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) ) (td): ModuleList() (tu): ModuleList( (0): ConvTranspose3d(320, 320, kernel_size=(2, 2, 1), stride=(2, 2, 1), bias=False) (1): ConvTranspose3d(320, 256, kernel_size=(2, 2, 2), stride=(2, 2, 2), bias=False) (2): ConvTranspose3d(256, 128, kernel_size=(2, 2, 2), stride=(2, 2, 2), bias=False) (3): ConvTranspose3d(128, 64, kernel_size=(2, 2, 2), stride=(2, 2, 2), bias=False) (4): ConvTranspose3d(64, 32, kernel_size=(2, 2, 2), stride=(2, 2, 2), bias=False) ) (seg_outputs): ModuleList( (0): Conv3d(320, 4, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False) (1): Conv3d(256, 4, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False) (2): Conv3d(128, 4, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False) (3): Conv3d(64, 4, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False) (4): Conv3d(32, 4, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False) ) ) 2021-09-26 09:54:32.599978:
2021-09-26 09:54:32.601143:
epoch: 0
Traceback (most recent call last):
File "/home/m250962/anaconda3/envs/segmentation_env/bin/nnUNet_train", line 33, in <module>
sys.exit(load_entry_point('nnunet', 'console_scripts', 'nnUNet_train')())
File "/research/projects/Mana/GBM_PsP/nnUNet/nnunet/run/run_training.py", line 142, in main
trainer.run_training()
File "/research/projects/Mana/GBM_PsP/nnUNet/nnunet/training/network_training/nnUNetTrainerV2.py", line 387, in run_training
ret = super().run_training()
File "/research/projects/Mana/GBM_PsP/nnUNet/nnunet/training/network_training/nnUNetTrainer.py", line 320, in run_training
super(nnUNetTrainer, self).run_training()
File "/research/projects/Mana/GBM_PsP/nnUNet/nnunet/training/network_training/network_trainer.py", line 445, in run_training
l = self.run_iteration(self.tr_gen, True)
File "/research/projects/Mana/GBM_PsP/nnUNet/nnunet/training/network_training/nnUNetTrainerV2.py", line 244, in run_iteration
loss = self.loss(output, target)
File "/home/m250962/anaconda3/envs/segmentation_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/research/projects/Mana/GBM_PsP/nnUNet/nnunet/training/loss_functions/deep_supervision.py", line 39, in forward
l = weights[0] * self.loss(x[0], y[0])
File "/home/m250962/anaconda3/envs/segmentation_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/research/projects/Mana/GBM_PsP/nnUNet/nnunet/training/loss_functions/dice_loss.py", line 326, in forward
dc_loss = self.dc(net_output, target) if self.weight_dice != 0 else 0
File "/home/m250962/anaconda3/envs/segmentation_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/research/projects/Mana/GBM_PsP/nnUNet/nnunet/training/loss_functions/dice_loss.py", line 180, in forward
tp, fp, fn, _ = get_tp_fp_fn_tn(x, y, axes, loss_mask, False)
File "/research/projects/Mana/GBM_PsP/nnUNet/nnunet/training/loss_functions/dice_loss.py", line 130, in get_tp_fp_fn_tn
y_onehot.scatter_(1, gt, 1)
RuntimeError: index 4 is out of bounds for dimension 1 with size 4
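I think the failure can be reproduced outside nnUNet: BraTS segmentations use the label values 0, 1, 2 and 4, but with num_classes 3 the one-hot tensor only has 4 channels (background + 3 classes), so scattering label value 4 would need channel index 4. A minimal sketch of what I believe is happening (variable names are mine, not nnUNet's actual code):

```python
import torch

# BraTS-style ground truth with non-consecutive labels 0, 1, 2, 4,
# shaped (batch, 1, x, y, z) like nnUNet's segmentation targets.
num_classes = 4  # background + 3 foreground classes
gt = torch.tensor([0, 1, 2, 4]).reshape(1, 1, 1, 1, 4)

y_onehot = torch.zeros(1, num_classes, 1, 1, 4)
try:
    y_onehot.scatter_(1, gt, 1)  # label 4 needs channel index 4, but dim 1 has size 4
except RuntimeError as e:
    print(e)  # index 4 is out of bounds for dimension 1 with size 4

# Remapping label 4 to 3, so labels are consecutive 0..3, makes the scatter succeed
# (I believe the BraTS conversion script in the nnUNet repo does this remapping).
gt[gt == 4] = 3
y_onehot.scatter_(1, gt, 1)
```

So it looks like my labels were not converted to consecutive integers starting at 0 before preprocessing.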