mariem-m11 opened 1 month ago
I have the same problem as you. Have you solved it?
/root/autodl-tmp/3dtransunet/configs/Brats/encoder_plus_decoder.yaml run on fold: 0
Please cite the following paper when using nnUNet:
Isensee, F., Jaeger, P.F., Kohl, S.A.A. et al. "nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation." Nat Methods (2020). https://doi.org/10.1038/s41592-020-01008-z
If you have questions or suggestions, feel free to open an issue at https://github.com/MIC-DKFZ/nnUNet
['/root/autodl-tmp/3dtransunet/training/network_training'] nnUNetTrainerV2_DDP nnunet.training.network_training
###############################################
I am running the following nnUNet: 3d_fullres
My trainer class is: <class 'nn_transunet.trainer.nnUNetTrainerV2_DDP.nnUNetTrainerV2_DDP'>
For that I will be using the following configuration:
I am using stage 0 from these plans
I am using sample dice + CE loss
I am using data from this folder: /root/autodl-tmp/3dtransunet/data/nnUNet_preprocessed/Task082_BraTS2020/nnUNetData_plans_v2.1
###############################################
Traceback (most recent call last):
File "train.py", line 321, in
After modifying nnUNetTrainerV2, I'm encountering an issue where training stops after the first epoch due to problems with handling the loss functions. I've made several changes, but I'm still stuck. @princerice, have you found a solution to this problem?
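For reference, the "sample dice + CE loss" named in the log combines a soft Dice term with cross-entropy. A minimal PyTorch sketch of such a combined loss (illustrative only, not nnUNet's actual DC_and_CE_loss, and it omits the deep-supervision weighting that nnUNetTrainerV2 applies across decoder outputs):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftDiceCELoss(nn.Module):
    # Minimal combined soft-Dice + cross-entropy loss (a sketch, not
    # nnUNet's DC_and_CE_loss; shapes and weighting are illustrative).
    def __init__(self, smooth: float = 1e-5, ce_weight: float = 1.0):
        super().__init__()
        self.smooth = smooth
        self.ce_weight = ce_weight

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits: (B, C, X, Y, Z); target: (B, X, Y, Z) with long class indices
        ce = F.cross_entropy(logits, target)
        probs = torch.softmax(logits, dim=1)
        one_hot = F.one_hot(target, num_classes=logits.shape[1])
        one_hot = one_hot.permute(0, 4, 1, 2, 3).float()  # -> (B, C, X, Y, Z)
        dims = (0, 2, 3, 4)  # reduce over batch and spatial axes, keep classes
        intersect = (probs * one_hot).sum(dims)
        denom = probs.sum(dims) + one_hot.sum(dims)
        dice = (2 * intersect + self.smooth) / (denom + self.smooth)
        return self.ce_weight * ce + (1 - dice.mean())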
I am attempting to train using a single-GPU setup. I modified the config file to use nnUNetTrainerV2 instead of nnUNetTrainerV2_DDP, and modified train.sh like so:

nnunet_use_progress_bar=1 CUDA_VISIBLE_DEVICES=0 torchrun ./train.py --task="Task180_BraTSMet" --fold=${fold} --config=$CONFIG --network="3d_fullres" --resume='' --local-rank=0 --optim_name="adam" --valbest --val_final --npz
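One pitfall when keeping the torchrun launcher but switching to the non-DDP trainer is that any remaining torch.distributed calls fail if no process group was initialized. A minimal guard, sketched under the assumption that train.py can branch on world size (the helper name is hypothetical; 3D TransUNet's train.py may be organized differently):

import os
import torch
import torch.distributed as dist

def maybe_init_distributed() -> bool:
    # Initialize a process group only when torchrun spawned more than one process.
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    if world_size > 1:
        dist.init_process_group(backend="nccl")
        torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", "0")))
        return True
    return False  # single-GPU run: skip DDP wrapping and dist.* calls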
The script throws a KeyError when trying to access patch_size from plan_data['plans']['plans_per_stage'][resolution_index].

Error message:
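Setting the message itself aside, a quick way to see which keys the preprocessed plans file actually exposes is a sketch like the following (the directory comes from the log above, but the plans filename and the exact nesting are assumptions; nnUNet v1 typically stores a pickled dict with a 'plans_per_stage' entry keyed by stage index):

import pickle

# Filename assumed; check the nnUNet_preprocessed task folder for the actual *.pkl.
plans_path = ("/root/autodl-tmp/3dtransunet/data/nnUNet_preprocessed/"
              "Task082_BraTS2020/nnUNetPlansv2.1_plans_3D.pkl")

with open(plans_path, "rb") as f:
    plans = pickle.load(f)

print(plans.keys())  # is 'plans_per_stage' at this level, or nested under 'plans'?
for stage, cfg in plans.get("plans_per_stage", {}).items():
    print(stage, sorted(cfg.keys()))  # a stage dict without 'patch_size' would explain the KeyError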