Closed: ChenglongWang closed this issue 4 years ago
Hi Fabian, I am also working on KiTS19 with nnU-Net, but when I train, lots of errors occur. I am using an NVIDIA GeForce RTX 2080 Ti. I got this error:
RuntimeError: cuda runtime error (30) : unknown error at /pytorch/aten/src/THC/THCGeneral.cpp:51
What are the things I should consider (or change) when I use nnU-Net with an NVIDIA GeForce RTX 2080 Ti? Thanks for your attention. I'm looking forward to your reply.
Hi, nowhere in my paper do I say that my result was generated with nnU-Net =) So it is not surprising that you cannot reproduce the results. The code for KiTS is based on nnU-Net but has some small modifications that make it better. I cannot share it right now because it is entangled with a different project - you will have to be patient ;-)
@abdulsatharst can you run any other PyTorch code? That does not look like an nnU-Net related problem. Also, please have a look at the readme: you need to have 12GB of GPU memory OR use the --fp16 option.
Best, Fabian
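The "cuda runtime error (30)" above typically points to a broken driver/CUDA setup rather than anything nnU-Net-specific. A minimal sanity check along the lines Fabian suggests (run any other PyTorch code) might look like this; `cuda_sanity_check` is a hypothetical helper and the only assumption is that PyTorch is installed:

```python
import torch

def cuda_sanity_check():
    """Check that PyTorch can initialize CUDA at all.

    If this raises the same 'cuda runtime error (30)', the problem is the
    driver/CUDA installation (or the machine needs a reboot), not nnU-Net.
    """
    print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        # Allocating a tensor on the GPU forces CUDA context creation,
        # which is where error (30) usually surfaces.
        x = torch.ones(2, 2, device="cuda")
        print("device:", torch.cuda.get_device_name(0), "| sum:", x.sum().item())

cuda_sanity_check()
```

If this script fails outside of nnU-Net as well, the fix lies with the NVIDIA driver or CUDA toolkit, not with the training code.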
This means that you didn't prepare the data properly. Your labels must be consecutive integers [0, 1, 2, 3, ...]. Best, Fabian
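The consecutive-labels requirement can be satisfied with a small remapping step before preprocessing. A minimal sketch, assuming the segmentation has already been loaded as a numpy array (`remap_labels` is a hypothetical helper name, not part of nnU-Net):

```python
import numpy as np

def remap_labels(seg):
    """Map arbitrary label values to consecutive integers 0, 1, 2, ...

    The background (lowest value) becomes 0, the next label 1, and so on.
    """
    old_values = np.sort(np.unique(seg))
    out = np.zeros_like(seg)
    for new_value, old_value in enumerate(old_values):
        out[seg == old_value] = new_value
    return out

# Example: labels {0, 2, 5} become {0, 1, 2}
seg = np.array([0, 2, 5, 2])
print(remap_labels(seg))  # [0 1 2 1]
```

After remapping, remember to update the label definitions in dataset.json accordingly so the integers and class names still match.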
@FabianIsensee Hey did you win Kits?
http://results.kits-challenge.org/miccai2019/
according to this, yes :-)
Amazing!
Hi Fabian, I have trained your network for the Decathlon challenge Task09_Spleen on an NVIDIA RTX 2080 Ti. But after two days my system got stuck, and I only reached around epoch 340. Now I am trying to test the model with the test set given by the challenge. I ran the following command:
python3 inference/predict_simple.py -i /home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs -o /home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/PREDICTIONS -t Task09_Spleen -tr nnUNetTrainer -m 2d -f 1
But I got some errors:
Please cite the following paper when using nnUNet:
Isensee, Fabian, et al. "nnU-Net: Breaking the Spell on Successful Medical Image Segmentation." arXiv preprint arXiv:1904.08128 (2019).
If you have questions or suggestions, feel free to open an issue at https://github.com/MIC-DKFZ/nnUNet
using model stored in /home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/RESULTS/nnUNet/2d/Task09_Spleen/nnUNetTrainer__nnUNetPlans
emptying cuda cache
loading parameters for folds, [1]
using the following model files: ['/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/RESULTS/nnUNet/2d/Task09_Spleen/nnUNetTrainer__nnUNetPlans/fold_1/model_best.model']
starting preprocessing generator
starting prediction...
preprocessing /home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/PREDICTIONS/spl.nii.gz
preprocessing /home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/PREDICTIONS/sple.nii.gz
This worker has ended successfully, no errors to report
This worker has ended successfully, no errors to report
This worker has ended successfully, no errors to report
This worker has ended successfully, no errors to report
error in ['/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_1.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_7.nii.gz']
all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 34 and the array at index 1 has size 114
This worker has ended successfully, no errors to report
error in ['/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_11.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_15.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_23.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_30.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_34.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_35.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_36.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_37.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_39.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_42.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_43.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_48.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_50.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_51.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_54.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_55.nii.gz', 
'/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_57.nii.gz', '/home/AI/Abdul_sathar/SEGMENTAION/nnUNet-master_v2/nnunet/DATA/BASE/nnUNet_raw/Task09_Spleen/imagesTs/spleen_58.nii.gz']
all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 157 and the array at index 1 has size 38
This worker has ended successfully, no errors to report
I think the error is:
all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 157 and the array at index 1 has size 38
I created a folder for writing the results, but after running the above command I found only plans.pkl in it. What I actually need is the predicted masks as images, or any format that can be converted to an image.
Thank you.
The test data need to be named the same way as your train_splitted data, so the files must be spleen_1_0000.nii.gz, not spleen_1.nii.gz. Best, Fabian
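For a single-modality dataset like Task09_Spleen, adding the missing modality suffix is a simple batch rename. A minimal sketch (`add_modality_suffix` is a hypothetical helper; it assumes every file in the folder ends in .nii.gz and that modality 0 is the only channel):

```python
import os
import shutil

def add_modality_suffix(folder, modality=0):
    """Rename e.g. spleen_1.nii.gz -> spleen_1_0000.nii.gz in place.

    nnU-Net expects a 4-digit modality index before the extension;
    files that already carry the suffix are left untouched.
    """
    suffix = "_%04d" % modality
    for fname in sorted(os.listdir(folder)):
        if not fname.endswith(".nii.gz"):
            continue
        base = fname[:-len(".nii.gz")]
        if base.endswith(suffix):
            continue  # already in nnU-Net naming convention
        new_name = base + suffix + ".nii.gz"
        shutil.move(os.path.join(folder, fname), os.path.join(folder, new_name))
```

Run this once on the imagesTs folder before calling predict_simple.py and the concatenation errors above should disappear, since the preprocessing workers will then find the files they expect.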
@ChenglongWang Hey, can you please share what kidney and tumor Dice you're able to achieve with the default nnU-Net? Thanks
Hi Fabian, can I use this framework for ultrasound liver/kidney segmentation by making some modifications (such as changing the input file format, etc.)?
Hi, I need some additional information to be able to help you. I assume your ultrasound data is 3D? If so, what format is it saved in? You should be able to convert any 3D image to NIfTI and then just use nnU-Net as it is. I do not recommend you change things because it can get quite complicated if you do. Best, Fabian
Hi @FabianIsensee, when can we expect the KiTS code to be included in the repo?
Not very soon, I need to finish up some other things first. The KiTS code will come with a new version of nnU-Net, and there are a lot of things that need to be done until then.
Hi Fabian
Now, I am trying to reproduce your score from the KiTS19 challenge. I used your nnU-Net architecture, augmentations, and other configuration in my own workflow. Unfortunately, I cannot reproduce your performance. I would be very grateful if you could release an updated version for the KiTS challenge.
Thank you so much. @ChenglongWang Hello, could I get in touch with you? I would also like to reproduce the KiTS challenge results.
Hello @FabianIsensee, is your code for the KiTS19 challenge available anywhere?
Thank you
We outperformed our KiTS2019 code with the most recent version of nnU-Net, so just use that :-)
Hi @FabianIsensee, I am trying to reproduce your KiTS2019 results and struggling to get a good lesion Dice score. I am using the 3D residual U-Net (_generic_modular_residualUNet.py) provided in the nnU-Net repository with the preprocessing steps mentioned in your paper. I suspect the reason for the low performance (60% lesion Dice) is the tuning of some hyper-parameters in my pipeline. I have tried to keep the configuration as similar as possible, but there is some information I could not find in the paper. Therefore, I would be grateful if you could answer some questions to help me figure out the issue in my work.
I look forward to hearing from you.
Thank you.
Hi @ErumMushtaq, there really is no need to use the residual U-Net at all. Our most recent standard nnU-Net works better than that, as you can see in our Nature Methods publication (results tables are in the supplement). Just use the standard nnU-Net ;-)
Hi @FabianIsensee, thank you so much for the prompt response. I have been having memory issues when recreating the KiTS19 results, essentially because of the intermediate files saved by nnU-Net. Is there a simple way to generate the KiTS19 results without saving intermediate files? I also tried using the configuration settings mentioned in your Nature paper and applied the resampling, clipping, normalization, and data augmentations, but unfortunately that did not give me better values for tumor segmentation.
I am not quite sure what you are trying to do. If you want to reproduce our results, you can also just use the pretrained weights. If you want to train yourself, then please tell me where you run out of memory. During preprocessing? There is no way around saving intermediary files. They should not be a big problem, however - maybe 100GB max.