rohanbanerjee closed this issue 4 months ago
Regarding choosing between `re-training` and `fine-tuning`: I checked the results qualitatively and found that `fine-tuning` performs better in most cases, for example segmenting the first and last slices (`re-training` inference misses the first and last slices in some cases). `fine-tuning` learns the shape of the spinal cord better than `re-training`, hence resulting in more precise segmentations. I have therefore come to the conclusion that `fine-tuning` is the better strategy for our problem for this round of training, and I will go ahead with `fine-tuning` in the next rounds of iterations too.
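The first/last-slice check described above can be sketched programmatically. This is a hypothetical helper, not the actual QC procedure used here: it flags a binary prediction whose first or last slice along a chosen axis is left unsegmented (the slice axis and the toy mask are assumptions; in practice the masks would be loaded from NIfTI files, e.g. with nibabel).

```python
import numpy as np

def missing_end_slices(mask: np.ndarray, axis: int = 2) -> bool:
    """Return True if the first or last slice along `axis` is empty."""
    first = np.take(mask, 0, axis=axis)
    last = np.take(mask, -1, axis=axis)
    return bool(first.sum() == 0 or last.sum() == 0)

# Toy example: a 4x4x5 "cord" mask that misses the last slice,
# mimicking the failure mode seen with the re-training model.
mask = np.zeros((4, 4, 5), dtype=np.uint8)
mask[1:3, 1:3, :4] = 1            # segmented on slices 0-3, empty on slice 4
print(missing_end_slices(mask))   # True
```

Running such a check over all predictions would turn the qualitative comparison into a simple count of failing cases per model.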
The list of 30 subjects chosen for manual corrections is below: qc_fail.yml.zip
The QC report for the manually corrected segmentations (done by @MerveKaptan and me) for the above 30 subjects is below:
@jcohenadad we would like your inputs for the above manually corrected images. We will use these images for the next round of training.
CC: @MerveKaptan
I cannot 'save all' (@joshuacwnewton would you mind looking into this?)
Here are the saved FAIL and ARTIFACTS: Archive.zip
> I cannot 'save all' (@joshuacwnewton would you mind looking into this?)
It looks as though this QC report may have been generated with mixed versions of SCT? (The copy of main.js in the uploaded folder is missing some key functions needed for 'Save All' to work.)
> It looks as though this QC report may have been generated with mixed versions of SCT? (The copy of main.js in the uploaded folder is missing some key functions needed for 'Save All' to work.)
I did a `git fetch` and a `git pull` before using SCT to generate the above report. When I run `sct_check_dependencies`, it shows this SHA: `git-master-7b8600645b9df14d18e79d3f78bc0c9fe80c3199`
@MerveKaptan @rohanbanerjee every time I label images as "artifact", do you add these files to the exclude.yml file? Please document these changes with a cross-reference to my comments where I link the commented QC reports.
> @MerveKaptan @rohanbanerjee every time i label images as "artifact", do you add these files in the exclude.yml file? Please document these changes with a cross-ref to my comments where i link the commented QC reports
Yes, all the artifact images are tracked here: https://github.com/sct-pipeline/fmri-segmentation/issues/25#issuecomment-2050305016
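For traceability, one lightweight convention (a sketch only; the subject IDs below are illustrative placeholders, not the actual excluded files) is to keep each excluded image in exclude.yml alongside a pointer back to the QC comment that flagged it:

```yaml
# Hypothetical structure for exclude.yml -- entries are placeholders.
exclude:
  - sub-example01_task-rest_bold.nii.gz   # artifact, flagged during QC review
  - sub-example02_task-rest_bold.nii.gz   # artifact, flagged during QC review
# cross-ref: https://github.com/sct-pipeline/fmri-segmentation/issues/25#issuecomment-2050305016
```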
Closing the issue since the round 1 training was successfully completed (including running inference and manual correction).
Continuation from the previous round of training: https://github.com/sct-pipeline/fmri-segmentation/issues/34
What is the round 1 model?
The model which was trained on the ✅ images, as per the QCs mentioned in #34, is the `round 1` model. A total of 30 images were added in the training of this model, since we fine-tuned the previously trained `baseline` model. The models were trained in 2 different settings (explained in #36):

- 1 `fold_all` model (discussion can be found here: https://github.com/MIC-DKFZ/nnUNet/issues/1364#issuecomment-1492075312), trained with 126 images (`baseline` data + manually corrected data), which will be called `re-training` from now on;
- 1 fine-tuning model, which will be called `fine-tuning` from now on.

A list of subjects (for later reference) used for the `re-training` is below: retraining.json

A list of subjects used for the `fine-tuning` is below: finetuning.json

The config (containing preprocessing and hyperparameters) for nnUNetv2 training is:
Config file for re-training: plans.json
Config file for fine-tuning: plans.json
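For reference, the two settings above map onto nnUNetv2's training CLI roughly as follows; this is a sketch only, and the dataset ID, configuration name, and checkpoint path are placeholders, not the exact values used in this project.

```shell
# re-training: train from scratch on the pooled 126 images, using the single
# "all" fold (fold_all).
nnUNetv2_train DATASET_ID 3d_fullres all

# fine-tuning: same training call, but initialised from the baseline model's
# weights via -pretrained_weights.
nnUNetv2_train DATASET_ID 3d_fullres all \
    -pretrained_weights /path/to/baseline/checkpoint_final.pth
```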
The steps to reproduce the above QC results (/run inference) are the following:
```shell
cd fmri-segmentation
```
Next steps:

- held-out test set (#33)