MIC-DKFZ / nnUNet

Apache License 2.0

[Request] Configuration for KiTS19 #37

Closed ChenglongWang closed 4 years ago

ChenglongWang commented 5 years ago

Hi Fabian

I am trying to reproduce your score in the KiTS19 challenge. I used your nnU-Net architecture, augmentations, and other configuration in my own workflow. Unfortunately, I cannot reproduce your performance. I would be very grateful if you could release a version updated for the KiTS challenge.

Thank you so much.

abdulsatharst commented 5 years ago

Hi Fabian, I am also working on KiTS19 with nnUNet, but during training I get lots of errors. I am using an NVIDIA GeForce RTX 2080 Ti. I got this error:

RuntimeError: cuda runtime error (30) : unknown error at /pytorch/aten/src/THC/THCGeneral.cpp:51

What should I consider (or change) when using nnUNet with an NVIDIA GeForce RTX 2080 Ti? Thanks for your attention. I'm looking forward to your reply.

FabianIsensee commented 5 years ago

Hi, nowhere in my paper do I say that my result was generated with nnU-Net =) So it is not surprising that you cannot reproduce the results. The code for KiTS is based on nnU-Net but has some small modifications to make it better. I cannot share it right now because it is entangled with a different project - you will have to be patient ;-)

@abdulsatharst can you run any other PyTorch code? That does not look like an nnU-Net related problem. Also, please have a look at the readme: you need to have 12GB of GPU memory OR use the --fp16 option.

Best, Fabian
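(A minimal PyTorch sanity check along the lines Fabian suggests, assuming PyTorch is installed; if this small script reproduces the same `cuda runtime error (30)`, the problem lies in the driver/CUDA setup rather than in nnU-Net:)

```python
import torch

# Print the installed version and whether CUDA is usable at all.
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # A trivial GPU op; failures here point at the driver/toolkit, not nnU-Net.
    x = torch.randn(2, 3, device="cuda")
    print(x.sum().item())
```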

FabianIsensee commented 5 years ago

This means that you didn't prepare the data properly. Your labels must be consecutive integers [0, 1, 2, 3, ...]. Best, Fabian
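(One way to enforce this is to remap whatever label values a segmentation contains onto 0, 1, 2, ... before training. A minimal sketch, not part of nnU-Net; `remap_labels` is a hypothetical helper operating on a plain numpy array:)

```python
import numpy as np

def remap_labels(seg):
    """Map arbitrary label values to consecutive integers 0, 1, 2, ...

    The smallest original value becomes 0, the next becomes 1, and so on.
    """
    out = np.zeros_like(seg)
    for new, old in enumerate(np.unique(seg)):
        out[seg == old] = new
    return out
```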

DecentMakeover commented 5 years ago

@FabianIsensee Hey did you win Kits?

FabianIsensee commented 5 years ago

http://results.kits-challenge.org/miccai2019/

according to this, yes :-)

DecentMakeover commented 5 years ago

Amazing!

abdulsatharst commented 5 years ago

Hi Fabian, I have trained your network on the Decathlon challenge Task09_Spleen on an NVIDIA RTX 2080 Ti. But after two days my system got stuck, and I only reached around epoch 340. Now I am trying to test the model on the test set given by the challenge. I ran the following command:

python3 inference/predict_simple.py -i /home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs -o /home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/PREDICTIONS -t Task09_Spleen -tr nnUNetTrainer -m 2d -f 1

But I got some errors

Please cite the following paper when using nnUNet:

Isensee, Fabian, et al. "nnU-Net: Breaking the Spell on Successful Medical Image Segmentation." arXiv preprint arXiv:1904.08128 (2019).

If you have questions or suggestions, feel free to open an issue at https://github.com/MIC-DKFZ/nnUNet
using model stored in  /home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/RESULTS/nnUNet/2d/Task09_Spleen/nnUNetTrainer__nnUNetPlans
emptying cuda cache
loading parameters for folds, [1]
using the following model files:  ['/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/RESULTS/nnUNet/2d/Task09_Spleen/nnUNetTrainer__nnUNetPlans/fold_1/model_best.model']
starting preprocessing generator
starting prediction...
preprocessing /home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/PREDICTIONS/spl.nii.gz
preprocessing /home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/PREDICTIONS/sple.nii.gz
This worker has ended successfully, no errors to report
This worker has ended successfully, no errors to report
This worker has ended successfully, no errors to report
This worker has ended successfully, no errors to report
error in ['/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_1.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_7.nii.gz']
all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 34 and the array at index 1 has size 114
This worker has ended successfully, no errors to report
error in ['/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_11.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_15.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_23.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_30.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_34.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_35.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_36.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_37.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_39.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_42.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_43.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_48.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_50.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_51.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_54.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_55.nii.gz', 
'/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_57.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_58.nii.gz']
all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 157 and the array at index 1 has size 38
This worker has ended successfully, no errors to report

I think the error is:

all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 157 and the array at index 1 has size 38

I created a folder for writing the results, but after running the above command I found only plans.pkl in it. What I actually need is an image (mask), or any format that can be converted to an image format.

Thank you.

FabianIsensee commented 5 years ago

Test data need to be named the same way as your train_splitted data. So the files must be spleen_1_0000.nii.gz, not spleen_1.nii.gz. Best, Fabian
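(A one-off renaming pass could look like the sketch below; `add_modality_suffix` is a hypothetical helper, and the `_0000` suffix assumes a single input modality:)

```python
from pathlib import Path

def add_modality_suffix(folder):
    """Rename e.g. spleen_1.nii.gz -> spleen_1_0000.nii.gz.

    nnU-Net expects each input file to carry a modality index suffix;
    files that already end in _0000 are left untouched.
    """
    for f in sorted(Path(folder).glob("*.nii.gz")):
        stem = f.name[: -len(".nii.gz")]
        if not stem.endswith("_0000"):
            f.rename(f.with_name(stem + "_0000.nii.gz"))
```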

DecentMakeover commented 5 years ago

@ChenglongWang Hey, can you please share what kidney and tumor Dice you were able to achieve with the default nnUNet? Thanks

abdulsatharst commented 5 years ago

Hi Fabian, can I use this framework for ultrasound liver/kidney segmentation by making some modifications (such as changing the input file format, etc.)?

FabianIsensee commented 5 years ago

Hi, I need some additional information to be able to help you. I assume your ultrasound data is 3D? If so, what format is it saved in? You should be able to convert any 3D image to NIfTI and then just use nnU-Net as it is. I do not recommend changing things, because it can get quite complicated if you do. Best, Fabian

vatsal-sodha commented 5 years ago

Hi @FabianIsensee, when can we expect the KiTS code to be included in the repo?

FabianIsensee commented 5 years ago

Not very soon, I need to finish up some other things first. The KiTS code will come with a new version of nnU-Net, and there is a lot that needs to be done until then.

JiangYuhan1996 commented 4 years ago

Hi Fabian

I am trying to reproduce your score in the KiTS19 challenge. I used your nnU-Net architecture, augmentations, and other configuration in my own workflow. Unfortunately, I cannot reproduce your performance. I would be very grateful if you could release a version updated for the KiTS challenge.

Thank you so much. @ChenglongWang Hello, could I please get in touch with you? I would also like to reproduce the KiTS challenge results.

SpyridoulaZagkou4 commented 3 years ago

Hello @FabianIsensee, is your code for the KiTS19 challenge available anywhere?

Thank you

FabianIsensee commented 3 years ago

We outperformed our KiTS2019 code with the most recent version of nnU-Net, so just use that :-)

ErumMushtaq commented 2 years ago

Hi @FabianIsensee, I am trying to reproduce your KiTS2019 results and struggling to get a good lesion Dice score. I am using the 3D residual U-Net (_generic_modular_residualUNet.py) from the nnU-Net repository with the preprocessing steps mentioned in your paper. I suspect the reason for the low performance (60% lesion Dice) is the tuning of some hyperparameters in my pipeline. I have tried to keep the configuration as close to yours as possible, but there is some information I could not find in the paper. I would therefore be grateful if you could answer the following questions to help me figure out the issue in my work:

  1. Did you use the SGD optimizer with momentum? If yes, was the momentum value 0.9?
  2. Did you use an LR scheduler as well? If yes, was it ReduceLROnPlateau?
  3. You mention you use 250 batches of batch size 2, which means 500 training samples per epoch. With the 5-fold split there should be around 160 training images for the KiTS19 data; were the 500 samples generated by applying augmentations to those 160 original training images?
  4. Did you use the squared Dice loss or the basic one for the DC and CE loss? And was the weight ratio between the CE and Dice losses 1?
  5. Any other information/advice that could help me reproduce the results.

I look forward to hearing from you.

Thank you.

FabianIsensee commented 2 years ago

Hi @ErumMushtaq, there really is no need to use the residual U-Net at all. Our most recent standard nnU-Net works better than that, as you can see in our Nature Methods publication (the results tables are in the supplement). Just use the standard nnU-Net ;-)

ErumMushtaq commented 2 years ago

Hi @FabianIsensee, thank you so much for the prompt response. I have been having memory issues in recreating the KiTS19 results, essentially because of the intermediate files saved by nnU-Net. Is there a simple way to generate the KiTS19 results without saving intermediate files? I also tried using the configuration settings mentioned in your Nature paper, and applied the resampling, clipping, normalization, and data augmentations to the data, but unfortunately that did not give me better values for tumor segmentation.

FabianIsensee commented 2 years ago

I am not quite sure what you are trying to do. If you want to reproduce our results, you can also just use the pretrained weights. If you want to train yourself, then please tell me where you run out of memory. During preprocessing? There is no way around saving intermediary files. They should not be a big problem, however - maybe 100GB max.