sct-pipeline / fmri-segmentation

Repository for the project on automatic spinal cord segmentation based on fMRI EPI data

Training and inference discussion for active learning round 1 #35

Closed rohanbanerjee closed 4 months ago

rohanbanerjee commented 7 months ago

Continuation from the previous round of training: https://github.com/sct-pipeline/fmri-segmentation/issues/34

What is the round 1 model

The round 1 model is the model trained on the images marked ✅ in the QC described in #34. A total of 30 images were added to the training of this model, since we fine-tuned the previously trained baseline model.

The models were trained in 2 different settings (explained in #36):

  1. A fold_all model (discussion can be found here: https://github.com/MIC-DKFZ/nnUNet/issues/1364#issuecomment-1492075312) trained on 126 images (baseline data + manually corrected data), which will be called re-training from now on.
  2. A model fine-tuned from the previously trained baseline model, which will be called fine-tuning from now on.

A list of subjects (for later reference) used for the re-training is below: retraining.json

A list of subjects used for the fine-tuning is below: finetuning.json

The configs (containing the preprocessing and hyperparameters) for nnUNetv2 training are:

Config file for re-training: plans.json

Config file for fine-tuning: plans.json
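
For reference, a minimal sketch of how the two settings map onto the standard nnUNetv2 CLI (the dataset ID, configuration, and checkpoint path below are placeholders, not the exact commands used):

```bash
# Re-training: train a single fold_all model from scratch on all 126 images
nnUNetv2_plan_and_preprocess -d 101 --verify_dataset_integrity
nnUNetv2_train 101 3d_fullres all

# Fine-tuning: initialize from the baseline model's weights and continue training
nnUNetv2_train 101 3d_fullres all \
    -pretrained_weights /path/to/baseline_model/fold_all/checkpoint_final.pth
```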

The steps to reproduce the above QC results (/run inference) are the following:

  1. Clone this repository
  2. cd fmri-segmentation
  3. Download the model weights (the whole folder) from the link: https://drive.google.com/drive/folders/1WSn-15wGWz6i2_aZeQTwKls2sZ6dpfHf?usp=share_link
  4. Install the dependencies:
    pip install -r run_nnunet_inference_requirements.txt
  5. Run the inference command:
    python run_nnunet_inference.py --path-dataset <PATH TO FOLDER CONTAINING IMAGES, SUFFIXED WITH _0000> --path-out <PATH TO OUTPUT FOLDER> --path-model <PATH TO DOWNLOADED WEIGHTS FOLDER>
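
For example, assuming the input EPI images have already been renamed with the nnU-Net channel suffix _0000 (the folder and file names below are hypothetical placeholders):

```bash
# Hypothetical input layout: one NIfTI per subject, ending in _0000.nii.gz
# data_to_segment/
# ├── sub-01_task-rest_bold_mean_0000.nii.gz
# └── sub-02_task-rest_bold_mean_0000.nii.gz

python run_nnunet_inference.py \
    --path-dataset data_to_segment \
    --path-out segmentations \
    --path-model nnUNet_round1_weights
```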

Next steps:

rohanbanerjee commented 6 months ago

Regarding the choice between re-training and fine-tuning, I checked the results qualitatively and found that:

  1. Fine-tuning performs better in most cases, such as segmenting the first and last slices (re-training inference misses the first and last slice in some cases).
  2. Fine-tuning learns the shape of the spinal cord better than re-training, hence resulting in more precise segmentations.
  3. Discussions in #36 also suggest fine-tuning is the better choice.

I have therefore come to the conclusion that fine-tuning is the better strategy for our problem for this round of training, and I will go ahead with fine-tuning in the next rounds of iterations as well.

rohanbanerjee commented 6 months ago

The list of 30 subjects chosen for manual corrections is below: qc_fail.yml.zip

The QC report for the manually corrected segmentations (done by @MerveKaptan and me) for the above 30 subjects is below:

qc_round_1_corrected.zip
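
For context, the per-subject entries in a QC report like this are typically generated with SCT's sct_qc command; a minimal sketch (the file names, process flag, and output folder are assumptions, not the exact call used here):

```bash
# Add one subject's EPI mean image and its manually corrected segmentation
# to the QC report (paths are placeholders)
sct_qc -i sub-01_task-rest_bold_mean.nii.gz \
       -s sub-01_task-rest_bold_mean_seg-manual.nii.gz \
       -p sct_deepseg_sc \
       -qc qc_round_1_corrected
```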

@jcohenadad we would like your input on the above manually corrected images. We will use these images for the next round of training.

CC: @MerveKaptan

jcohenadad commented 6 months ago

I cannot 'save all' (@joshuacwnewton would you mind looking into this?)

Here are the saved FAIL and ARTIFACTS: Archive.zip

joshuacwnewton commented 6 months ago

> I cannot 'save all' (@joshuacwnewton would you mind looking into this?)

It looks as though this QC report may have been generated with mixed versions of SCT? (The copy of main.js in the uploaded folder is missing some key functions needed for 'Save All' to work.)

rohanbanerjee commented 6 months ago

> It looks as though this QC report may have been generated with mixed versions of SCT? (The copy of main.js in the uploaded folder is missing some key functions needed for 'Save All' to work.)

I did a git fetch and a git pull before using SCT to generate the above report. When I run sct_check_dependencies, it shows this SHA: git-master-7b8600645b9df14d18e79d3f78bc0c9fe80c3199

jcohenadad commented 6 months ago

@MerveKaptan @rohanbanerjee every time I label images as "artifact", do you add these files to the exclude.yml file? Please document these changes with a cross-reference to my comments where I link the commented QC reports.

rohanbanerjee commented 6 months ago

> @MerveKaptan @rohanbanerjee every time I label images as "artifact", do you add these files to the exclude.yml file? Please document these changes with a cross-reference to my comments where I link the commented QC reports.

Yes, all the artifact images are tracked here: https://github.com/sct-pipeline/fmri-segmentation/issues/25#issuecomment-2050305016

rohanbanerjee commented 4 months ago

Closing the issue since the round 1 training was successfully completed (including running inference and manual correction).