naga-karthik opened this issue 9 months ago
It seems like the (3D) pre-training model for compression detection is not learning anything (the pseudo Dice stays at 0 in the graph below).
Maybe a 2D model will work? Or, instead of segmenting one-voxel compression labels (which are hard to train on), how about training a classification model for compression detection?
I don't think predicting a single voxel is robust enough. I am tagging @NathanMolinier, who is working on labeling intervertebral discs and I'm sure has a lot to say about this.
My two cents: start with object detection (to mitigate class imbalance), a region-based segmentation, or a multi-channel input, in this case image + SC seg (@plbenveniste is working on this and can elaborate on the pros/cons).
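For the multi-channel idea, here is a minimal sketch of what a nnU-Net v2 `dataset.json` could look like with the image and the SC segmentation as two input channels. This is not part of the project yet; the `T2w` channel name and the case count are placeholders, and `noNorm` follows nnU-Net's documented convention for skipping intensity normalization on a mask channel:

```python
# Hypothetical nnU-Net v2 dataset.json for a two-channel input (image + SC seg).
# Channel files follow nnU-Net's _0000/_0001 filename suffix convention;
# "noNorm" tells nnU-Net not to intensity-normalize the binary cord mask.
import json

dataset_json = {
    "channel_names": {
        "0": "T2w",     # anatomical image (placeholder modality name)
        "1": "noNorm",  # SC segmentation passed as a second channel
    },
    "labels": {"background": 0, "lesion": 1},
    "numTraining": 100,  # placeholder, set to the actual number of cases
    "file_ending": ".nii.gz",
}

with open("dataset.json", "w") as f:
    json.dump(dataset_json, f, indent=4)
```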
> I don't think predicting a single voxel is robust enough.
You're right, thanks! Indeed, Naga and I have discussed internally that predicting a single voxel is probably not the way to go.
> My two cents: start with object detection (to mitigate class imbalance), a region-based segmentation, or a multi-channel input, in this case image + SC seg
Thanks for the ideas. We are currently considering a classification task (compressed vs. non-compressed slice) followed by placing the "compression pixel" during post-processing (for example, we automatically segment the SC and then put the pixel at the SC center of mass). Cross-referencing the relevant issue: https://github.com/spinalcordtoolbox/spinalcordtoolbox/issues/4333#issue-2079783443.
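A minimal sketch of that post-processing step, assuming a binary SC segmentation and a list of slice indices flagged as compressed by the classifier (the function name and the slice-axis convention are hypothetical):

```python
import numpy as np
from scipy import ndimage

def place_compression_voxels(sc_seg: np.ndarray, compressed_slices: list) -> np.ndarray:
    """Put a single 'compression voxel' at the SC center of mass of every
    axial slice the classifier flagged as compressed.
    Assumes axis 2 of the volume is the slice (S-I) axis."""
    out = np.zeros_like(sc_seg, dtype=np.uint8)
    for z in compressed_slices:
        cord = sc_seg[:, :, z] > 0
        if cord.any():
            # center of mass of the binary cord mask on this slice
            r, c = ndimage.center_of_mass(cord)
            out[int(round(r)), int(round(c)), z] = 1
    return out
```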
From my experience with the vertebral labeling project, using a segmentation algorithm such as nnUNet to identify single voxels is neither practical nor effective. However, you could still try creating spheres centered on your voxels to improve performance, but I'm not sure that will lead to great results either.
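For reference, a minimal sketch of the sphere idea, dilating each ground-truth voxel into a small ball before training (the radius is an arbitrary placeholder):

```python
import numpy as np
from scipy import ndimage

def voxels_to_spheres(label: np.ndarray, radius: int = 3) -> np.ndarray:
    """Dilate single-voxel labels into spheres of `radius` voxels so the
    target occupies enough volume for overlap-based losses like Dice."""
    # build a spherical structuring element of the requested radius
    zz, yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1, -radius:radius + 1]
    sphere = (xx ** 2 + yy ** 2 + zz ** 2) <= radius ** 2
    return ndimage.binary_dilation(label > 0, structure=sphere).astype(np.uint8)
```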
The main issue with single-voxel detection is the choice of loss function. Indeed, I am currently trying to replace the Dice loss with other loss functions, such as the mean squared error, to evaluate the distance error between the ground truth and the predictions.
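One common way to make MSE carry a distance signal is to regress a Gaussian heatmap centered on the ground-truth voxel instead of a binary mask. A minimal PyTorch sketch, assuming a single-channel heatmap output (the sigma value is a placeholder):

```python
import torch
import torch.nn.functional as F

def gaussian_heatmap(shape, center, sigma=2.0):
    """3D Gaussian target centered on the ground-truth voxel: values decay
    with distance, so MSE penalizes predictions by how far off they land."""
    grids = torch.meshgrid(
        *[torch.arange(s, dtype=torch.float32) for s in shape], indexing="ij"
    )
    sq_dist = sum((g - float(c)) ** 2 for g, c in zip(grids, center))
    return torch.exp(-sq_dist / (2 * sigma ** 2))

# usage sketch: pred is the network's (D, H, W) heatmap, gt_voxel the (z, y, x) label
# loss = F.mse_loss(pred, gaussian_heatmap(pred.shape, gt_voxel))
```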
> model pretrained on `dcm-zurich` for detecting compression sites and using those pre-trained weights to fine-tune a model for lesion segmentation on `dcm-zurich-lesions-*` datasets.
What about pre-training the model to segment SC and then fine-tuning it for lesion segmentation?
This issue intends to compare the performance of a model trained from scratch on `dcm-zurich-lesions-*` (#1) vs. a model pretrained on `dcm-zurich` for detecting compression sites, whose pre-trained weights are then used to fine-tune a model for lesion segmentation on the `dcm-zurich-lesions-*` datasets. Pre-training and fine-tuning are done using nnUNet to get a baseline estimate of model performance (and to see whether this idea works at all).
Working branch: `nk/dcm-zurich-pretraining`
Training script: https://github.com/ivadomed/model-seg-dcm/blob/nk/dcm-zurich-pretraining/nnunet/run_dcm_zurich_pretraining_and_finetuning.sh