282857341 / nnFormer

MIT License
798 stars · 84 forks

Problems relevant to using model_best weight for synapse task. #79

Closed GYDDHPY closed 2 years ago

GYDDHPY commented 2 years ago

Hi there, when using the pretrained model_best weights for the Synapse task, an error occurs.

No such file or directory: '/home/xychen/new_transformer/nnUNetFrame/DATASET/nnUNet_preprocessed/Task002_Abdomen/nnUNetPlansv2.1_plans_3D.pkl'

I suspected a configuration error, but I couldn't find one.

Could you please tell me which setting leads to this error and how to fix it?

Looking forward to your reply.

282857341 commented 2 years ago

Can you provide the complete error information? We want to know which step leads to this result.

GYDDHPY commented 2 years ago

> Can you provide the complete error information? We want to know which step leads to this result.

Sorry for omitting the error report details. The terminal output from running inference is included below:

using model stored in work_0/Medical_image_analysis/segmentation/nnFormer/DATASET/nnFormer_trained_models/nnFormer/3d_fullres/Task002_Synapse/nnFormerTrainerV2_nnformer_synapse__nnFormerPlansv2.1
This model expects 1 input modalities for each image
Found 12 unique case ids, here are some examples: ['img0038' 'img0029' 'img0004' 'img0036' 'img0004' 'img0038' 'img0001' 'img0004' 'img0022' 'img0032']
If they don't look right, make sure to double check your filenames. They must end with _0000.nii.gz etc
number of cases: 12
number of cases that still need to be predicted: 12
emptying cuda cache
loading parameters for folds, [0]
Traceback (most recent call last):
  File "/root/miniconda3/envs/nnFormer/bin/nnFormer_predict", line 33, in <module>
    sys.exit(load_entry_point('nnformer', 'console_scripts', 'nnFormer_predict')())
  File "/webdav/MyData/jupyterlab/work_0/Medical_image_analysis/segmentation/nnFormer/nnFormer/nnformer/inference/predict_simple.py", line 232, in main
    step_size=step_size, checkpoint_name=args.chk)
  File "/webdav/MyData/jupyterlab/work_0/Medical_image_analysis/segmentation/nnFormer/nnFormer/nnformer/inference/predict.py", line 637, in predict_from_folder
    segmentation_export_kwargs=segmentation_export_kwargs)
  File "/webdav/MyData/jupyterlab/work_0/Medical_image_analysis/segmentation/nnFormer/nnFormer/nnformer/inference/predict.py", line 185, in predict_cases
    trainer, params = load_model_and_checkpoint_files(model, folds, mixed_precision=mixed_precision, checkpoint_name=checkpoint_name)
  File "/webdav/MyData/jupyterlab/work_0/Medical_image_analysis/segmentation/nnFormer/nnFormer/nnformer/training/model_restore.py", line 146, in load_model_and_checkpoint_files
    trainer = restore_model(join(folds[0], "%s.model.pkl" % checkpoint_name), fp16=mixed_precision)
  File "/webdav/MyData/jupyterlab/work_0/Medical_image_analysis/segmentation/nnFormer/nnFormer/nnformer/training/model_restore.py", line 96, in restore_model
    trainer = tr(*init)
  File "/webdav/MyData/jupyterlab/work_0/Medical_image_analysis/segmentation/nnFormer/nnFormer/nnformer/training/network_training/nnFormerTrainerV2_nnformer_synapse.py", line 56, in __init__
    self.load_plans_file()
  File "/webdav/MyData/jupyterlab/work_0/Medical_image_analysis/segmentation/nnFormer/nnFormer/nnformer/training/network_training/nnFormerTrainer_synapse.py", line 326, in load_plans_file
    self.plans = load_pickle(self.plans_file)
  File "/root/miniconda3/envs/nnFormer/lib/python3.6/site-packages/batchgenerators/utilities/file_and_folder_operations.py", line 49, in load_pickle
    with open(file, mode) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/xychen/new_transformer/nnUNetFrame/DATASET/nnUNet_preprocessed/Task002_Abdomen/nnUNetPlansv2.1_plans_3D.pkl'

The model_best weights for the Synapse task used in the inference were downloaded from the link provided in the repo. To distinguish the official file from the model I trained myself, I renamed the official model file. I hope that isn't what caused the error.

Thank you for your attention.

Best wishes.

282857341 commented 2 years ago

This happens because the inference code reads model.best.pkl. I have updated nnformer/training/model_restore.py and nnformer/inference/predict_simple.py; you can paste them in directly. Please let me know if any problems remain.
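For readers hitting the same error: the traceback shows that the checkpoint's .model.pkl stores the trainer's init arguments, including an absolute plans-file path from the machine the model was trained on, which does not exist locally. A minimal workaround sketch (the function name `repoint_plans_file` and the rewrite strategy are assumptions for illustration, not the repo's actual fix) is to rewrite that stored path to point at your own preprocessed directory before restoring the model:

```python
import os
import pickle

def repoint_plans_file(model_pkl_path, local_preprocessed_dir):
    """Rewrite any absolute *_plans_3D.pkl path stored in a .model.pkl
    so it points at the local nnFormer_preprocessed task directory.

    Hypothetical helper: assumes the pickle holds a dict whose "init"
    tuple contains the plans-file path, as suggested by the traceback
    (restore_model unpickles the file and calls tr(*init))."""
    with open(model_pkl_path, "rb") as f:
        info = pickle.load(f)
    init = list(info["init"])
    for i, arg in enumerate(init):
        if isinstance(arg, str) and arg.endswith("_plans_3D.pkl"):
            # keep the file name, swap in the local directory
            init[i] = os.path.join(local_preprocessed_dir, os.path.basename(arg))
    info["init"] = tuple(init)
    with open(model_pkl_path, "wb") as f:
        pickle.dump(info, f)
```

The alternative, of course, is simply to pull the maintainer's updated model_restore.py and predict_simple.py, which handle this on the loading side.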

282857341 commented 2 years ago

Maybe you should change the name to model_best_official.model.pkl? The pkl file name must end with xxx.model.pkl.
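The reason the suffix matters is visible in the traceback: the loader joins the checkpoint name with a fixed ".model.pkl" suffix ("%s.model.pkl" % checkpoint_name). A small sketch of the expected layout (the helper `checkpoint_paths` is illustrative, not part of the repo):

```python
import os

def checkpoint_paths(fold_dir, checkpoint_name="model_best_official"):
    """Return the (weights, pickle) file paths the restore step would
    look for, given a fold directory and a checkpoint base name.

    Illustrative only: mirrors the "%s.model.pkl" pattern seen in the
    traceback, so a renamed official checkpoint must keep both the
    <name>.model and <name>.model.pkl files with matching base names."""
    weights = os.path.join(fold_dir, "%s.model" % checkpoint_name)
    pkl = os.path.join(fold_dir, "%s.model.pkl" % checkpoint_name)
    return weights, pkl
```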

GYDDHPY commented 2 years ago

> This happens because the inference code reads model.best.pkl. I have updated nnformer/training/model_restore.py and nnformer/inference/predict_simple.py; you can paste them in directly. Please let me know if any problems remain.

> Maybe you should change the name to model_best_official.model.pkl? The pkl file name must end with xxx.model.pkl.

Hi, after using the updated files and the model.pkl file produced when I ran the code, I successfully reproduced the results reported in your paper.

I appreciate your help very much.

In addition, I noticed that the results from the model I trained myself differ from yours: I got a lower average Dice score, and the per-organ Dice scores also differ considerably. The results I get on my server are as follows:


Organ          Dice                  HD
spleen         0.8901956679255393    16.700692695067122
right kidney   0.8647417236245586    9.98501719957393
left kidney    0.886764651450738     16.26021461440303
gallbladder    0.6772675115338549    10.48997061055664
liver          0.9647827844838203    2.9164427440500087
stomach        0.8339267771015865    27.828125186467364
aorta          0.9166707139476739    4.621760283071231
pancreas       0.8147960606100776    4.608103420070204

Overall: dsc 0.8561432363347312, hd 11.67629084415744
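As a sanity check on the reported numbers, the overall dsc and hd values above are simply the unweighted means of the eight per-organ scores:

```python
# Sanity check: the overall dsc/hd reported above are the unweighted
# means of the eight per-organ values quoted in this comment.
dice = {
    "spleen": 0.8901956679255393,
    "right_kidney": 0.8647417236245586,
    "left_kidney": 0.886764651450738,
    "gallbladder": 0.6772675115338549,
    "liver": 0.9647827844838203,
    "stomach": 0.8339267771015865,
    "aorta": 0.9166707139476739,
    "pancreas": 0.8147960606100776,
}
hd = {
    "spleen": 16.700692695067122,
    "right_kidney": 9.98501719957393,
    "left_kidney": 16.26021461440303,
    "gallbladder": 10.48997061055664,
    "liver": 2.9164427440500087,
    "stomach": 27.828125186467364,
    "aorta": 4.621760283071231,
    "pancreas": 4.608103420070204,
}
mean_dice = sum(dice.values()) / len(dice)  # ~0.8561
mean_hd = sum(hd.values()) / len(hd)        # ~11.6763
```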


I found that some random settings are not fixed, e.g. torch.backends.cudnn.deterministic = False.
Is there any other reason that could cause such performance differences?
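For reference, a common way to reduce run-to-run variation in PyTorch training is to seed every RNG and request deterministic cuDNN kernels. This is a general sketch, not nnFormer's own configuration; note that some CUDA ops remain nondeterministic even with these flags, and disabling cudnn.benchmark can slow training:

```python
import random

import numpy as np
import torch

def seed_everything(seed: int = 12345) -> None:
    """Seed Python, NumPy, and PyTorch RNGs and make cuDNN deterministic.

    General-purpose sketch: this does not guarantee bitwise-identical
    training runs (some CUDA kernels are inherently nondeterministic),
    but it removes the most common sources of variation."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op if CUDA is unavailable
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```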

Looking forward to your reply.

Best wishes.