MIC-DKFZ / nnUNet


The segmentation file .nii.gz isn't exported to the output folder #382

Closed. AnoshMK closed this issue 3 years ago.

AnoshMK commented 3 years ago

Hi, @FabianIsensee

I installed nnUNet in a Python virtual environment on Windows. Now I am trying to run prediction with the KiTS19 pretrained model using the following command:

nnUNet_predict -i $nnUNet_raw_data_base/nnUNet_raw_data/Task048_KiTS_clean/imagesTs/ -o imagesRs/ -t Task048_KiTS_clean -m 3d_fullres --num_threads_preprocessing 0

I run the inference on case210_0000.nii.gz in the imagesTs folder. It reports that inference is done and that it is now waiting for the segmentation export to finish, but in the postprocessing step I get this error: The file "imagesRs/case210.nii.gz" does not exist. When I check the imagesRs folder, I only find two files: plans.pkl and postprocessing.json. I can't understand why the segmentation file case210.nii.gz isn't exported to the output folder. Do you have any idea how to solve this issue?

Thank you, Rasha

This is the error:

using model stored in D:/nnUNet_Win1/nnUNet_trained_models\nnUNet\3d_fullres\Task048_KiTS_clean\nnUNetTrainerV2__nnUNetPlansv2.1
This model expects 1 input modalities for each image
Found 1 unique case ids, here are some examples: ['case210']
If they don't look right, make sure to double check your filenames. They must end with _0000.nii.gz etc
number of cases: 1
number of cases that still need to be predicted: 1
emptying cuda cache
loading parameters for folds, None
folds is None so we will automatically look for output folders (not using 'all'!)
found the following folds: ['D:/nnUNet_Win1/nnUNet_trained_models\nnUNet\3d_fullres\Task048_KiTS_clean\nnUNetTrainerV2__nnUNetPlansv2.1\fold_0', 'D:/nnUNet_Win1/nnUNet_trained_models\nnUNet\3d_fullres\Task048_KiTS_clean\nnUNetTrainerV2__nnUNetPlansv2.1\fold_1', 'D:/nnUNet_Win1/nnUNet_trained_models\nnUNet\3d_fullres\Task048_KiTS_clean\nnUNetTrainerV2__nnUNetPlansv2.1\fold_2', 'D:/nnUNet_Win1/nnUNet_trained_models\nnUNet\3d_fullres\Task048_KiTS_clean\nnUNetTrainerV2__nnUNetPlansv2.1\fold_3', 'D:/nnUNet_Win1/nnUNet_trained_models\nnUNet\3d_fullres\Task048_KiTS_clean\nnUNetTrainerV2__nnUNetPlansv2.1\fold_4']
using the following model files: ['D:/nnUNet_Win1/nnUNet_trained_models\nnUNet\3d_fullres\Task048_KiTS_clean\nnUNetTrainerV2__nnUNetPlansv2.1\fold_0\model_final_checkpoint.model', 'D:/nnUNet_Win1/nnUNet_trained_models\nnUNet\3d_fullres\Task048_KiTS_clean\nnUNetTrainerV2__nnUNetPlansv2.1\fold_1\model_final_checkpoint.model', 'D:/nnUNet_Win1/nnUNet_trained_models\nnUNet\3d_fullres\Task048_KiTS_clean\nnUNetTrainerV2__nnUNetPlansv2.1\fold_2\model_final_checkpoint.model', 'D:/nnUNet_Win1/nnUNet_trained_models\nnUNet\3d_fullres\Task048_KiTS_clean\nnUNetTrainerV2__nnUNetPlansv2.1\fold_3\model_final_checkpoint.model', 'D:/nnUNet_Win1/nnUNet_trained_models\nnUNet\3d_fullres\Task048_KiTS_clean\nnUNetTrainerV2__nnUNetPlansv2.1\fold_4\model_final_checkpoint.model']
starting preprocessing generator
starting prediction...
inference done. Now waiting for the segmentation export to finish...
postprocessing...

Please cite the following paper when using nnUNet: Fabian Isensee, Paul F. Jäger, Simon A. A. Kohl, Jens Petersen, Klaus H. Maier-Hein "Automated Design of Deep Learning Methods for Biomedical Image Segmentation" arXiv preprint arXiv:1904.08128 (2020). If you have questions or suggestions, feel free to open an issue at https://github.com/MIC-DKFZ/nnUNet

multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "c:\users\user\appdata\local\programs\python\python38\lib\multiprocessing\pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "c:\users\user\appdata\local\programs\python\python38\lib\multiprocessing\pool.py", line 51, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "d:\nnunet_win1\nnunet_env\lib\site-packages\nnunet\postprocessing\connected_components.py", line 34, in load_remove_save
    img_in = sitk.ReadImage(input_file)
  File "d:\nnunet_win1\nnunet_env\lib\site-packages\SimpleITK\extra.py", line 346, in ReadImage
    return reader.Execute()
  File "d:\nnunet_win1\nnunet_env\lib\site-packages\SimpleITK\SimpleITK.py", line 5779, in Execute
    return _SimpleITK.ImageFileReader_Execute(self)
RuntimeError: Exception thrown in SimpleITK ImageFileReader_Execute: D:\a\1\sitk\Code\IO\src\sitkImageReaderBase.cxx:97: sitk::ERROR: The file "imagesRs/case210.nii.gz" does not exist.
"""
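For reference, a minimal sanity check (not part of the original report; folder names assumed from the nnUNet_predict call above): the input test image has to carry the _0000 modality suffix, while a successful export writes the segmentation without it.

```bash
# Hedged sketch, assuming the folder names used in the command above.
# Input test images must end with a modality suffix such as _0000.nii.gz:
ls $nnUNet_raw_data_base/nnUNet_raw_data/Task048_KiTS_clean/imagesTs/
# expected: case210_0000.nii.gz

# A successful run writes the segmentation without the suffix:
ls imagesRs/
# expected after export: case210.nii.gz  plans.pkl  postprocessing.json
```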

FabianIsensee commented 3 years ago

Hi, looks like you are using Windows. Windows is not supported :-) Best, Fabian

Shyamapada34 commented 3 years ago

Hi Fabian, I have downloaded the pretrained model and I want to predict the test images using the following commands:

  1. nnUNet_convert_decathlon_task -i /scratch/chemical/visitor/smandal.visitor/shyama/nnUNet_raw_data_base/nnUNet_raw_data/Task07_Pancreas
  2. nnUNet_predict -i $nnUNet_raw_data_base/nnUNet_raw_data/Task007_Pancreas/imagesTs/ -o OUTPUT_DIRECTORY -t 7 -m 3d_fullres

It then ran through successfully with the following output:

Please cite the following paper when using nnUNet: Fabian Isensee, Paul F. Jäger, Simon A. A. Kohl, Jens Petersen, Klaus H. Maier-Hein "Automated Design of Deep Learning Methods for Biomedical Image Segmentation" arXiv preprint arXiv:1904.08128 (2020). If you have questions or suggestions, feel free to open an issue at https://github.com/MIC-DKFZ/nnUNet

using model stored in /scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1
This model expects 1 input modalities for each image
Found 1 unique case ids, here are some examples: ['pancreas_002']
If they don't look right, make sure to double check your filenames. They must end with _0000.nii.gz etc
number of cases: 1
number of cases that still need to be predicted: 1
emptying cuda cache
loading parameters for folds, None
folds is None so we will automatically look for output folders (not using 'all'!)
found the following folds: ['/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_0', '/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_1', '/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_2', '/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_3', '/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_4']
2020-11-10 20:43:36.475079: Using dummy2d data augmentation
using the following model files: ['/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_0/model_final_checkpoint.model', '/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_1/model_final_checkpoint.model', '/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_2/model_final_checkpoint.model', '/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_3/model_final_checkpoint.model', '/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_4/model_final_checkpoint.model']
starting preprocessing generator
starting prediction...
preprocessing OUTPUT_DIRECTORY/pancreas_002.nii.gz
using preprocessor GenericPreprocessor
before crop: (1, 81, 512, 512) after crop: (1, 81, 512, 512) spacing: [2.5 0.75390601 0.75390601]
..................................................
data shape: (1, 81, 481, 481)
patch size: [ 40 224 224]
steps (x, y, and z): [[0, 14, 27, 41], [0, 86, 171, 257], [0, 86, 171, 257]]
number of tiles: 64
using precomputed Gaussian
prediction done
inference done. Now waiting for the segmentation export to finish...
force_separate_z: None interpolation order: 1 separate z: True lowres axis [0]
separate z, order in z is 0 order inplane is 1
postprocessing...

I searched my RESULTS_FOLDER, but I didn't find any output. Could you let me know where I can find the predicted .nii.gz file on which I have to do postprocessing?

with regards Shyama

FabianIsensee commented 3 years ago

Hi Shyama, your predicted niftis will be located in OUTPUT_DIRECTORY (the one you specified via -o in the nnUNet_predict command). For some reason it only found a single file in that folder. Can you please provide a screenshot (or the output of ls -al) of the imagesTs folder? Best, Fabian
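For context, a hedged example of what such a listing might look like when the test images follow the expected naming scheme (path taken from the commands above, case ids are illustrative):

```bash
ls -al $nnUNet_raw_data_base/nnUNet_raw_data/Task007_Pancreas/imagesTs/
# correctly converted test images end with the modality suffix, e.g.
# pancreas_002_0000.nii.gz
# pancreas_003_0000.nii.gz
```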

Shyamapada34 commented 3 years ago

Hi Fabian, thanks for the reply! I have tried many times to predict the output using your pretrained model, but I never got one, and I don't know where my mistake is. I am running this on our institute's HPC server on a Linux platform. I kept the folders in scratch and configured them. I followed these steps (a short sketch of the variable setup follows the list):

  1. export PATH=$PATH:/home/chemical/visitor/smandal.visitor/.local/bin/
  2. export nnUNet_raw_data_base=/scratch/chemical/visitor/smandal.visitor/shyama/nnUNet_raw_data_base
  3. export nnUNet_preprocessed=/scratch/chemical/visitor/smandal.visitor/shyama/nnUNet_preprocessed
  4. export RESULTS_FOLDER=/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER
  5. export OUTPUT=/home/chemical/visitor/smandal.visitor/shyama/OUTPUT
  6. nnUNet_convert_decathlon_task -i /scratch/chemical/visitor/smandal.visitor/shyama/nnUNet_raw_data_base/nnUNet_raw_data/Task07_Pancreas
  7. nnUNet_predict -i $nnUNet_raw_data_base/nnUNet_raw_data/Task007_Pancreas/imagesTs/ -o OUTPUT -t 7 -m 3d_fullres
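As an illustrative aside (an assumption, not something stated in this thread): exported shell variables need a leading $ when they are reused in a later command, so "-o OUTPUT" in step 7 points at a relative folder literally named OUTPUT in the current working directory, whereas "-o $OUTPUT" would expand to the path exported in step 5.

```bash
# Minimal sketch of the variable expansion (illustrative only; paths from the steps above).
export OUTPUT=/home/chemical/visitor/smandal.visitor/shyama/OUTPUT
nnUNet_predict -i $nnUNet_raw_data_base/nnUNet_raw_data/Task007_Pancreas/imagesTs/ \
    -o $OUTPUT -t 7 -m 3d_fullres   # note the $ before OUTPUT
```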

Then the following happened:

Please cite the following paper when using nnUNet: Fabian Isensee, Paul F. Jäger, Simon A. A. Kohl, Jens Petersen, Klaus H. Maier-Hein "Automated Design of Deep Learning Methods for Biomedical Image Segmentation" arXiv preprint arXiv:1904.08128 (2020). If you have questions or suggestions, feel free to open an issue at https://github.com/MIC-DKFZ/nnUNet

using model stored in /scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1
This model expects 1 input modalities for each image
Found 6 unique case ids, here are some examples: ['pancreas_030' 'pancreas_031' 'pancreas_031' 'pancreas_512' 'pancreas_030' 'pancreas_033']
If they don't look right, make sure to double check your filenames. They must end with _0000.nii.gz etc
number of cases: 6
number of cases that still need to be predicted: 6
emptying cuda cache
loading parameters for folds, None
folds is None so we will automatically look for output folders (not using 'all'!)
found the following folds: ['/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_0', '/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_1', '/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_2', '/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_3', '/scratch/chemical/visitor/smandal.visitor/shyama/RESULTS_FOLDER/nnUNet/3d_fullres/Task007_Pancreas/nnUNetTrainerV2__nnUNetPlansv2.1/fold_4']
2020-11-14 16:10:23.559922: Using dummy2d data augmentation
..........................................................................
using precomputed Gaussian
prediction done
This worker has ended successfully, no errors to report
predicting OUTPUT/pancreas_030.nii.gz
force_separate_z: None interpolation order: 1 separate z: True lowres axis [0]
debug: mirroring True mirror_axes (0, 1, 2)
step_size: 0.5
do mirror: True
data shape: (1, 190, 623, 623)
patch size: [ 40 224 224]
steps (x, y, and z): [[0, 19, 38, 56, 75, 94, 112, 131, 150], [0, 100, 200, 299, 399], [0, 100, 200, 299, 399]]
number of tiles: 225
using precomputed Gaussian
separate z, order in z is 0 order inplane is 1
prediction done
(the same "debug: mirroring ... prediction done" block is printed four more times)
This worker has ended successfully, no errors to report (printed once per worker)
inference done. Now waiting for the segmentation export to finish...
force_separate_z: None interpolation order: 1 separate z: True lowres axis [0]
separate z, order in z is 0 order inplane is 1
postprocessing...

But I didn't get any file in the directory OUTPUT. A screenshot of imagesTs is attached herewith (ImagesTs_screenshot).

Kindly guide me on how to get the predicted images. Is there an h5 file (like nnunet_weight.h5) available for the nnUNet model for the Task07_Pancreas data? Thanks in advance.

with regards Shyama

FabianIsensee commented 3 years ago

Your files are named incorrectly. Please use the nnUNet_convert_decathlon_task command to convert datasets from the medical segmentation decathlon. Please also have a look at these readmes: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/dataset_conversion.md https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/data_format_inference.md Best, Fabian
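For illustration, a hedged sketch of that workflow using the paths already quoted in this thread; the conversion step is what renames the decathlon files to the required _0000.nii.gz scheme before prediction.

```bash
# Sketch based on the commands already used above (paths are the ones from this thread).
nnUNet_convert_decathlon_task -i /scratch/chemical/visitor/smandal.visitor/shyama/nnUNet_raw_data_base/nnUNet_raw_data/Task07_Pancreas
# Afterwards, predict from the *converted* task folder (Task007_Pancreas, not Task07_Pancreas):
nnUNet_predict -i $nnUNet_raw_data_base/nnUNet_raw_data/Task007_Pancreas/imagesTs/ \
    -o $OUTPUT -t 7 -m 3d_fullres
```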

FabianIsensee commented 3 years ago

If you would like to use a pretrained model, follow the main readme. There is a section dedicated to that: https://github.com/MIC-DKFZ/nnUNet#how-to-run-inference-with-pretrained-models Edit: there is no h5 file, ONNX export, or anything similar. Inference is too complicated for that, and you need to use the nnUNet code for it to work correctly.
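As a hedged illustration of that readme section (the download command name is taken from the nnU-Net v1 CLI and is not quoted in this thread):

```bash
# Assumed nnU-Net v1 helper: fetches the pretrained weights into RESULTS_FOLDER so that
# nnUNet_predict can find them; the linked readme section is authoritative.
nnUNet_download_pretrained_model Task007_Pancreas
```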

Shyamapada34 commented 3 years ago

Hi Fabian, thanks for the answer. My problem is solved.

with regards Shyama

DISAPPEARED13 commented 2 years ago

I met the same problem when I ran inference with the trained models directly, and I solved it: I had used the wrong model, one that expects only 1 modality, whereas my data has 4 modalities. You should check the task name. :) @AnoshMK
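To illustrate the modality point (hypothetical task and case names, not from this thread): a single-modality model expects one file per case, while a 4-modality model expects four files per case, numbered _0000 through _0003.

```bash
# Hypothetical example of a 4-modality test case (task and case names are made up):
ls $nnUNet_raw_data_base/nnUNet_raw_data/Task999_FourModalities/imagesTs/
# case_001_0000.nii.gz  case_001_0001.nii.gz  case_001_0002.nii.gz  case_001_0003.nii.gz
```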