luckieucas / FLARE23


Clarification Needed: Reproducing Results and Training Instructions #1

Open MadhuSaran26 opened 5 months ago

MadhuSaran26 commented 5 months ago

Hello,

I'm currently examining the possibility of reproducing your results. I understand your method comprises two stages. However, the execution instructions for Stage 1 / Phase 1 of your model seem to be missing; the README.md only covers instructions for Stage 2. I would greatly appreciate it if you could address the following queries:

  1. Which Trainer file should be used for Stage 1?
  2. How can I select 1000 predictions from the initial 1800 after Stage 1?
  3. What is the process for selecting the 100 tumor pseudo labels with the lowest uncertainty score?
  4. How do I execute the data cleaning step that precedes Stage 2?
  5. Should the data for Stage 2 (Dataset012, according to your README.md) include the partially labeled 2200 images along with the 1000 image predictions from Stage 1?
  6. Should the 2200 partially labeled images also be fully annotated before training for Stage 2?
  7. Will the remaining 1000 images used for Stage 2 have only tumor annotations?

Your guidance and responses to these questions would be immensely helpful.

Thank you in advance for your help.

luckieucas commented 4 months ago

Reply:

  1. We use `nnUNetTrainerFlare` as the trainer for Stage 1. We use this script to train Stage 1: `python run_training_Flare.py 2 3d_fullres 1 -tr nnUNetTrainerFlare`
  2. Please see `select_pseudo_label.py`.
  3. You can refer to `select_pseudo_label.py` and uncomment lines 23, 26, and 29.
  4. You can refer to Section 2.2 of our paper: "Data cleaning for robust training".
  5. Yes.
  6. No, the 2200 partially labeled images remain partially labeled in Stage 2.
  7. The remaining 1000 images have all the organ annotations, and 100 of these 1000 also have tumor annotations. You can refer to "Context-aware CutMix for online tumor augmentation" in Section 2.2.
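The selection described in answers 2 and 3 is implemented in the repository's `select_pseudo_label.py`; as a rough illustration of the idea only, a lowest-uncertainty selection might look like the following minimal sketch (function and case names are hypothetical, and the per-case scalar uncertainty, e.g. mean voxel-wise softmax entropy, is assumed to be precomputed):

```python
# Hypothetical sketch of uncertainty-ranked pseudo-label selection.
# The repository's actual logic lives in select_pseudo_label.py; here each
# case is assumed to carry a precomputed scalar uncertainty score.

def select_lowest_uncertainty(scores, k):
    """Return the k case IDs with the lowest uncertainty score."""
    ranked = sorted(scores.items(), key=lambda item: item[1])
    return [case_id for case_id, _ in ranked[:k]]

# Toy example: pick the 2 most confident of 4 pseudo-labeled cases.
scores = {"case_001": 0.31, "case_002": 0.12, "case_003": 0.45, "case_004": 0.08}
selected = select_lowest_uncertainty(scores, k=2)
print(selected)  # ['case_004', 'case_002']
```

The same ranking applies whether one keeps the 1000 most confident predictions out of 1800 or the 100 tumor pseudo labels with the lowest uncertainty; only `k` and the score dictionary change.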
MadhuSaran26 commented 4 months ago

Hi, thank you so much for your reply. Your answers helped me understand your model a lot better.

I have a few follow-up questions for your answers.

> 1. We use `nnUNetTrainerFlare` as the trainer for Stage 1. We use this script to train Stage 1: `python run_training_Flare.py 2 3d_fullres 1 -tr nnUNetTrainerFlare`
    nnUNetv2_plan_and_preprocess -d 2 --verify_dataset_integrity -np 4 -c 3d_fullres -overwrite_target_spacing 2.5 0.82 0.82
    nnUNetv2_plan_and_preprocess -d 2 --verify_dataset_integrity -np 4 -c 3d_lowres -overwrite_target_spacing 2.5 0.82 0.82
    nnUNetv2_plan_and_preprocess -d 2 --verify_dataset_integrity -np 4 -c 2d -overwrite_target_spacing 2.5 0.82 0.82

I have used the above steps for preprocessing and planning, and I'm getting the following error while training stage 1 with nnUNetTrainerFlare. I suspect that the error could be because of the planner that I'm using.

[Screenshot of the training error, 2024-03-10]
> 3. You can refer to `select_pseudo_label.py` and uncomment lines 23, 26, and 29.

In `select_pseudo_label.py`, I noticed the following directories. They seem to indicate the use of a different trainer (`nnUNetTrainerFlareMergeProb`) and configurations (`3d_midres` and `3d_verylowres`).

    pseudo_label_path1 = join(nnUNet_results,"Dataset002_FLARE2023/nnUNetTrainerFlareMergeProb__nnUNetPlansSmall__3d_verylowres/fold_all/validation")
    pseudo_label_path2 = join(nnUNet_results,"Dataset002_FLARE2023/nnUNetTrainerFlareMergeProb__nnUNetPlans__3d_midres/fold_all/validation_fold1")
    pseudo_label_path3 = join(nnUNet_results,"Dataset002_FLARE2023/nnUNetTrainerFlareMergeProb__nnUNetPlans__3d_midres/fold_all/unlabeled_data_pred_by_fold_all")
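For what it's worth, the trainer name `nnUNetTrainerFlareMergeProb` and the two prediction folders above suggest that class-probability maps from different configurations are merged before the final labels are taken. Purely as a hypothetical illustration (this is not the repository's code), merging two models' per-voxel probabilities by averaging and then taking the argmax could look like:

```python
# Hypothetical illustration of merging per-voxel class probabilities from two
# models (e.g. a 3d_verylowres and a 3d_midres prediction) and taking the
# argmax. The repository's actual merging is inside nnUNetTrainerFlareMergeProb.

def merge_prob_argmax(probs_a, probs_b):
    """Average two per-voxel class-probability lists; return per-voxel labels."""
    labels = []
    for pa, pb in zip(probs_a, probs_b):
        merged = [(a + b) / 2 for a, b in zip(pa, pb)]
        labels.append(max(range(len(merged)), key=merged.__getitem__))
    return labels

# Two voxels, three classes (background, organ, tumor).
model_a = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]
model_b = [[0.5, 0.4, 0.1], [0.2, 0.5, 0.3]]
print(merge_prob_argmax(model_a, model_b))  # [0, 2]
```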

**1. Could you please point me to the planners and trainers that you used for both Stage 1 and Stage 2 of your model?**

**2. Could you please also explain how I can obtain the directories listed above, which are used in `select_pseudo_label.py`?**
luckieucas commented 4 months ago
  1. I apologize for any mistakes. I have updated the `nnUNetPlans.json` file. For Stage 1, please use the following script: `python run_training_Flare.py 2 3d_midres 0 -tr nnUNetTrainerFlareMergeProb`. For Stage 2, use the script: `python run_training_Flare.py 2 3d_mylowres 0 -tr nnUNetTrainerFlarePseudoCutUnsupLow`.
  2. I have uploaded these directories to Google Drive: https://drive.google.com/drive/folders/1FlRBsnnKPM9UrQsj_GeOw-C-S9iGHTQu?usp=sharing. Additionally, if you prefer to obtain these directories by training the model yourself, please make sure to use the trainer `nnUNetTrainerFlareMergeProb`, the plans file `nnUNetPlansSmall.json` or `nnUNetPlans.json`, the `3d_verylowres` or `3d_midres` configuration, and to train on all folds.
MadhuSaran26 commented 4 months ago

Hi, thank you for the reply.

I've downloaded the latest plan files and I'm re-running the preprocessing using the following commands, as per your suggestion in the 2nd point.

    nnUNetv2_plan_and_preprocess -d 2 -np 4 -c 3d_mylowres
    nnUNetv2_plan_and_preprocess -d 2 -np 4 -c 3d_midres
    nnUNetv2_plan_and_preprocess -d 2 -np 4 -c 3d_verylowres

I'll reach out to you in case the issue with training persists.

Thank you once again for your time and support.