yeerwen / UniSeg

MICCAI 2023 Paper (Early Acceptance)

About Prediction on New Data #10

Closed. xixihean closed this issue 1 year ago.

xixihean commented 1 year ago

Thanks for your work. I want to ask how to copy Upstream/nnunet to replace the installed nnunet. I have no idea how to do it.

yeerwen commented 1 year ago

If you have installed nnunet using the anaconda framework, you will find a directory named 'nnunet' in the 'anaconda3/envs/your_envs/lib/python3.8/site-packages/' directory. Next, we need to replace it with the 'Upstream/nnunet' directory provided in our code.
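For example, a minimal Python sketch of that replacement (the site-packages location and the UniSeg clone path below are placeholders; adjust them to your environment):

```python
# Sketch: back up the installed nnunet package and replace it with UniSeg's Upstream/nnunet.
# Paths are examples only; adjust them to your anaconda environment and UniSeg checkout.
import shutil
import sysconfig
from pathlib import Path

site_packages = Path(sysconfig.get_paths()["purelib"])   # e.g. anaconda3/envs/your_envs/lib/python3.8/site-packages
installed_nnunet = site_packages / "nnunet"
uniseg_nnunet = Path("/path/to/UniSeg/Upstream/nnunet")  # where you cloned UniSeg

if installed_nnunet.exists():
    # keep a backup of the original package instead of deleting it
    shutil.move(str(installed_nnunet), str(installed_nnunet) + "_backup")
shutil.copytree(str(uniseg_nnunet), str(installed_nnunet))
```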

xixihean commented 1 year ago

> If you have installed nnunet using the anaconda framework, you will find a directory named 'nnunet' in the 'anaconda3/envs/your_envs/lib/python3.8/site-packages/' directory. Next, we need to replace it with the 'Upstream/nnunet' directory provided in our code.

I see. Thanks.

xixihean commented 1 year ago

Hello! I still have a question. When loading the trainer in the function load_model_and_checkpoint_files, it needs a "checkpoint_name.model.pkl", but you only released the "checkpoint_name.model". What can I do?

yeerwen commented 1 year ago

Could you provide more details, such as the command you ran and a screenshot of the error report? This information will help me understand how I can help you.

Heanhu commented 1 year ago

```
Traceback (most recent call last):
  File "/anaconda3/envs/heve/bin/nnUNet_predict", line 8, in <module>
    sys.exit(main())
  File "/anaconda3/envs/heve/lib/python3.9/site-packages/nnunet/inference/predict_simple.py", line 228, in main
    predict_from_folder(model_folder_name, input_folder, output_folder, folds, save_npz, num_threads_preprocessing,
  File "/anaconda3/envs/heve/lib/python3.9/site-packages/nnunet/inference/predict.py", line 713, in predict_from_folder
    return predict_cases(model, list_of_lists[part_id::num_parts], output_files[part_id::num_parts], folds,
  File "/anaconda3/envs/heve/lib/python3.9/site-packages/nnunet/inference/predict.py", line 186, in predict_cases
    trainer, params = load_model_and_checkpoint_files(model, folds, mixed_precision=mixed_precision,
  File "/anaconda3/envs/heve/lib/python3.9/site-packages/nnunet/training/model_restore.py", line 141, in load_model_and_checkpoint_files
    trainer = restore_model(join(folds[0], "%s.model.pkl" % checkpoint_name), fp16=mixed_precision)
  File "/anaconda3/envs/heve/lib/python3.9/site-packages/nnunet/training/model_restore.py", line 56, in restore_model
    info = load_pickle(pkl_file)
  File "/anaconda3/envs/heve/lib/python3.9/site-packages/batchgenerators/utilities/file_and_folder_operations.py", line 57, in load_pickle
    with open(file, mode) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/UniSeg/nnUNet_trained_models/UniSeg_Trainer/3d_fullres/Task97_MOTS/UniSeg_Trainer__DoDNetPlans/fold_0/model_best.model.pkl'
```

I stored the pretrained weights at that path.

yeerwen commented 1 year ago

I have updated the 'README' file to include a download link for the .pkl file and some necessary instructions. Thanks for the reminder. Moreover, if you perform upstream training yourself, the .pkl file is also written to the output path.
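For reference, a quick check that both checkpoint files are in place before running nnUNet_predict (the paths below are copied from the traceback above; adjust them to your setup):

```python
# Verify that nnU-Net's restore_model can find both the weights (.model)
# and the accompanying pickle (.model.pkl) in the fold directory.
from pathlib import Path

fold_dir = Path("/UniSeg/nnUNet_trained_models/UniSeg_Trainer/3d_fullres/"
                "Task97_MOTS/UniSeg_Trainer__DoDNetPlans/fold_0")
checkpoint_name = "model_best"

for suffix in (".model", ".model.pkl"):
    f = fold_dir / (checkpoint_name + suffix)
    print(f, "OK" if f.exists() else "MISSING")
```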

Heanhu commented 1 year ago

Thanks! I've managed to predict successfully, but there are still two small problems.

  1. First, my segmentation targets are hepatic vessels and tumors, so why does the segmentation result contain four classes? The last class occupies only a tiny fraction, a few dozen voxels.
  2. Second, my private CT input covers from the neck to the pelvis. When segmenting hepatic vessels and tumors, a region in the pelvis, far away from the liver, is mistaken for a tumor. Doesn't nnunet preprocess the images automatically, or do I need to crop the images myself first to get better results? The results were quite good when I tested on the 3Dircadb1 dataset.

yeerwen commented 1 year ago

Thank you for your questions. In fact, I have found these issues in UniSeg as well, and they seriously hinder its practicality and robustness.

For the first question, the reason is that the number of output channels in UniSeg is set to the maximum number of classes among all tasks (BraTS21 has 4 classes). When the ongoing task has fewer than 4 classes, we do not put any constraint on the extra output channels during training, so they can produce spurious predictions. We have updated the prediction-on-new-data code to remove predicted outputs that are not relevant to a specific task; you are welcome to use it, and feel free to let me know your results.

For the second question, I agree that cropping the image based on prior information is a practical way to avoid over-segmentation. In addition, introducing more training data that covers the pelvis and its surroundings might help the network produce better results.
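For illustration, a minimal sketch of the idea behind the updated prediction code (the function name and array shapes here are hypothetical, not the repository's exact implementation):

```python
# Sketch: keep only the output channels that belong to the current task
# before taking the argmax, so unsupervised channels cannot appear in the result.
import numpy as np

def predict_valid_classes(prob_map: np.ndarray, num_task_classes: int) -> np.ndarray:
    """prob_map: (C, D, H, W) softmax output, C = max classes over all tasks (4, from BraTS21).
    num_task_classes: number of classes of the current task (e.g. 3 for
    background / hepatic vessel / tumor)."""
    valid = prob_map[:num_task_classes]          # drop channels the task never supervises
    return np.argmax(valid, axis=0).astype(np.uint8)

# Example with random probabilities: a 4-channel prediction restricted to 3 valid classes
fake_probs = np.random.rand(4, 8, 8, 8).astype(np.float32)
seg = predict_valid_classes(fake_probs, num_task_classes=3)   # labels in {0, 1, 2} only
```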

angolin22 commented 1 year ago

Hello, when I used a model trained on my own dataset to make predictions, I also encountered the first problem mentioned above. Do I need to update the code and train again?

yeerwen commented 1 year ago

> Hello, when I used a model trained on my own dataset to make predictions, I also encountered the first problem mentioned above. Do I need to update the code and train again?

No. You only need to pull the updated code and rerun the prediction on new data.

xixihean commented 1 year ago

> Thank you for your questions. In fact, I have found these issues in UniSeg as well, and they seriously hinder its practicality and robustness.
>
> For the first question, the reason is that the number of output channels in UniSeg is set to the maximum number of classes among all tasks (BraTS21 has 4 classes). When the ongoing task has fewer than 4 classes, we do not put any constraint on the extra output channels during training, so they can produce spurious predictions. We have updated the prediction-on-new-data code to remove predicted outputs that are not relevant to a specific task; you are welcome to use it, and feel free to let me know your results.
>
> For the second question, I agree that cropping the image based on prior information is a practical way to avoid over-segmentation. In addition, introducing more training data that covers the pelvis and its surroundings might help the network produce better results.

Thanks for your timely reply. I cropped my own dataset and got a better result.
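For anyone curious, a rough sketch of that kind of cropping (the file names and the axial slice range are placeholders chosen for illustration; the range has to be picked per case):

```python
# Sketch: crop a neck-to-pelvis CT to the axial range around the liver
# before running prediction, so distant regions (e.g. the pelvis) are excluded.
import SimpleITK as sitk

img = sitk.ReadImage("case_0000.nii.gz")        # whole-body CT, placeholder file name
z_start, z_stop = 120, 320                       # axial range covering the liver (choose per case)
cropped = img[:, :, z_start:z_stop]              # slicing returns an image covering only that range
sitk.WriteImage(cropped, "case_cropped_0000.nii.gz")
```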

angolin22 commented 1 year ago

> Hello, when I used a model trained on my own dataset to make predictions, I also encountered the first problem mentioned above. Do I need to update the code and train again?
>
> No. You only need to pull the updated code and rerun the prediction on new data.

Oh, it works! But I have another question: how do I run postprocessing?

```
WARNING! Cannot run postprocessing because the postprocessing file is missing. Make sure to run consolidate_folds in the output folder of the model first!
The folder you need to run this in is /home/nnunetdev/Uniseg2/UniSeg/Datasets/nnUNet_results/UniSeg_Trainer/3d_fullres/Task333_3task/UniSeg_Trainer__DoDNetPlans
```
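For later readers, a hedged sketch of running consolidate_folds from Python. The import path and call below are assumptions about the nnU-Net v1 API and may differ in your installation; as far as I know it also requires the validation results of the trained folds to be present in the model folder.

```python
# Assumption: in nnU-Net v1 the consolidate_folds helper referenced by the warning
# lives in nnunet.postprocessing.consolidate_postprocessing; verify against your version.
from nnunet.postprocessing.consolidate_postprocessing import consolidate_folds

model_output_base = ("/home/nnunetdev/Uniseg2/UniSeg/Datasets/nnUNet_results/"
                     "UniSeg_Trainer/3d_fullres/Task333_3task/UniSeg_Trainer__DoDNetPlans")

# Intended to produce the postprocessing file that nnUNet_predict complains about;
# extra arguments (e.g. which folds were trained) may be needed in your setup.
consolidate_folds(model_output_base)
```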