Closed: xixihean closed this issue 1 year ago
If you have installed nnunet using the Anaconda framework, you will find a directory named 'nnunet' in the 'anaconda3/envs/your_envs/lib/python3.8/site-packages/' directory. Next, you need to replace it with the 'Upstream/nnunet' directory provided in our code.
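For anyone unsure how to do this in practice, here is a minimal sketch (an equivalent `cp -r` on the command line works just as well). The repo path is a placeholder; adjust it to wherever you cloned UniSeg.

```python
# Minimal sketch of the replacement step. REPO_ROOT is a placeholder for your
# clone of UniSeg; the installed package is backed up first so you can roll back.
import shutil
from pathlib import Path

import nnunet  # the pip/conda-installed package we want to replace

REPO_ROOT = Path("/path/to/UniSeg")           # placeholder: your clone of the repo
installed = Path(nnunet.__path__[0])          # .../site-packages/nnunet
backup = installed.with_name("nnunet_backup")

shutil.move(str(installed), str(backup))      # keep the original, just in case
shutil.copytree(REPO_ROOT / "Upstream" / "nnunet", installed)
print(f"Replaced {installed} with {REPO_ROOT / 'Upstream' / 'nnunet'}")
```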
I see. Thanks.
Hello! I still have a question. When loading the trainer in the function load_model_and_checkpoint_files, it needs a "checkpoint_name.model.pkl", but you only released the "checkpoint_name.model". What can I do?
Could you provide more details, such as the command you ran and screenshots of the error? That information will help me understand how I can help you.
Traceback (most recent call last):
File "/anaconda3/envs/heve/bin/nnUNet_predict", line 8, in <module>
I have updated the 'README' file to include a download link for the .pkl file and some necessary instructions. Thanks for the reminder. Moreover, if you run the upstream training yourself, a .pkl file is also produced in the output path.
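As a quick sanity check before predicting, you can verify that the .pkl sits next to the .model checkpoint in every fold folder. The sketch below assumes the nnU-Net default checkpoint name model_final_checkpoint and a placeholder model folder path.

```python
# Sanity check (a sketch, not part of the repo): load_model_and_checkpoint_files
# expects a "<checkpoint_name>.model.pkl" next to every "<checkpoint_name>.model",
# so confirm both exist in each fold folder before running prediction.
from pathlib import Path

model_folder = Path("/path/to/UniSeg_Trainer__DoDNetPlans")  # placeholder path
checkpoint_name = "model_final_checkpoint"                   # nnU-Net default; adjust if needed

for fold_dir in sorted(model_folder.glob("fold_*")):
    model_file = fold_dir / f"{checkpoint_name}.model"
    pkl_file = fold_dir / f"{checkpoint_name}.model.pkl"
    print(f"{fold_dir.name}: .model={model_file.exists()}  .model.pkl={pkl_file.exists()}")
```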
Thanks! I've managed to predict successfully, but there are still two small problems.
Thank you for your questions; these are great observations. In fact, I have noticed both issues in UniSeg as well, and they do hinder its practicality and robustness. Regarding the first question: the number of output channels in UniSeg is set to the maximum number of classes across all tasks (BraTS21 has 4 classes), and when the ongoing task has fewer than 4 classes, we do not add any constraints on the additional output channels during training, so spurious predictions can appear in them. We have updated the prediction-on-new-data code to remove predicted outputs that are not relevant to a specific task; you are welcome to use it, and feel free to let me know your results. Regarding the second question: I agree that cropping the image based on prior information is a practical way to avoid over-segmentation. In addition, introducing more training data that covers the pelvis and its surroundings might help the network produce better results.
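To illustrate the idea behind the updated prediction code (a simplified sketch, not the exact implementation in the repo): after taking the argmax over the network's four output channels, any label that does not exist in the current task is mapped back to background.

```python
# Simplified sketch of the fix: suppress labels that the current task does not
# define, so the unconstrained extra output channels cannot leak into the result.
import numpy as np

def suppress_irrelevant_labels(segmentation: np.ndarray, num_task_classes: int) -> np.ndarray:
    """Map any label >= num_task_classes back to background (label 0)."""
    cleaned = segmentation.copy()
    cleaned[cleaned >= num_task_classes] = 0
    return cleaned

# Example: a task with 3 labels (0 background, 1 organ, 2 tumour) predicted by a
# head with 4 output channels; label 3 is spurious and gets removed.
pred = np.array([[0, 1, 3],
                 [2, 3, 1]])
print(suppress_irrelevant_labels(pred, num_task_classes=3))
# [[0 1 0]
#  [2 0 1]]
```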
Hello, when I used a model trained on my own dataset to make predictions, I also encountered the first problem mentioned above. Do I need to update the code and train again?
No. You only need to update the code and run the prediction-on-new-data code again.
Thanks for your timely reply. I cropped my own dataset and got better results.
Oh, it works. But I have another question: how do I run postprocessing? I get this warning:
WARNING! Cannot run postprocessing because the postprocessing file is missing. Make sure to run consolidate_folds in the output folder of the model first!
The folder you need to run this in is /home/nnunetdev/Uniseg2/UniSeg/Datasets/nnUNet_results/UniSeg_Trainer/3d_fullres/Task333_3task/UniSeg_Trainer__DoDNetPlans
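One way to produce the missing postprocessing file is to call consolidate_folds on that folder. The sketch below assumes the standard nnU-Net v1 module layout (consolidate_folds in nnunet.postprocessing.consolidate_postprocessing) and that the validation results of all trained folds are present.

```python
# Sketch, assuming the standard nnU-Net v1 API: consolidate_folds aggregates the
# cross-validation results in the trainer's output folder and writes the
# postprocessing file there, which nnUNet_predict then picks up.
from nnunet.postprocessing.consolidate_postprocessing import consolidate_folds

model_folder = ("/home/nnunetdev/Uniseg2/UniSeg/Datasets/nnUNet_results/"
                "UniSeg_Trainer/3d_fullres/Task333_3task/UniSeg_Trainer__DoDNetPlans")
consolidate_folds(model_folder)  # needs the validation results of all trained folds
```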
Thanks for your work. I want to ask how to copy Upstream/nnunet to replace the installed nnunet. I have no idea how to do it.