ivadomed / ivadomed

Repository on the collaborative IVADO medical imaging project between the Mila and NeuroPoly labs.
https://ivadomed.org
MIT License

Possible to input segmentation to constrain model training/inference? #1259

Open jcohenadad opened 1 year ago

jcohenadad commented 1 year ago

Motivation for the feature

ivadomed offers a cascaded approach to improve performance (eg: first model segments the spinal cord, second model uses first model to create a bounding box and train lesion segmentation model inside the bounding box).

However, it is quite cumbersome to have to specify an ivadomed-specific model as the first step of the cascaded approach. A more elegant and modular approach would be to accept a segmentation as input, which would then be used to create the bounding box for model training/inference.

Maybe this feature already exists? If not, would it be 'complicated' to add it?

EDIT 20230104_105315: The argument fname_prior seems like a good candidate for such a feature:

https://github.com/ivadomed/ivadomed/blob/199eba68abe2f3d6627edd72bae6f2aa3b35a45d/ivadomed/inference.py#L358-L361

mariehbourget commented 1 year ago

Interesting questions! I went through the different possibilities, and here is what exists as of now depending on the context:

For training/testing

Two possibilities:

  1. A combination of object_detection_params and CenterCrop as described in the Cascaded architecture tutorial. In that case:
    • Before training/testing, ivadomed checks if a bounding_boxes.json file is present in the path_output (log directory).
    • If not, the model specified for the first step of the cascaded architecture (detection model) is used to create detection_masks in the path_output, as well as the bounding_boxes.json file that describes the bounding box specific to each source file.
    • Then the CenterCrop is used to crop at the center of those bounding boxes.
  2. A combination of roi_params and ROICrop. In that case:
    • A segmentation file with a specific suffix (ex: _seg-manual) and already present in the derivatives folder of the dataset is used as a reference.
    • Then the ROICrop crops the image around the center of mass of the reference segmentation.
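For concreteness, the two setups correspond roughly to config fragments like the ones below (a sketch based on the cascaded architecture tutorial; the model path, suffix, and crop sizes are placeholders). Option 1, object detection followed by CenterCrop:

```json
{
  "object_detection_params": {
    "object_detection_path": "path/to/detection_model",
    "safety_factor": [1.0, 1.0, 1.0]
  },
  "transformation": {
    "CenterCrop": {"size": [64, 64]}
  }
}
```

Option 2, a reference segmentation from the derivatives folder followed by ROICrop:

```json
{
  "roi_params": {
    "suffix": "_seg-manual",
    "slice_filter_roi": 10
  },
  "transformation": {
    "ROICrop": {"size": [48, 48]}
  }
}
```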

From what I gathered from your initial comment, the second possibility seems to be what you are looking for.

For segmentation

As you mentioned, the fname_prior option of segment_volume is the equivalent when it's time to segment an image with an already trained model.

fname_prior works with both types of model described above:

  1. With the cascaded architecture: the segmentation in the fname_prior file is used directly to create a bounding box, and then the CenterCrop applies.
  2. With ROI: the segmentation in the fname_prior file is used as the reference and then the ROICrop applies.

Note that the fname_prior option is currently not available via the --segment command. However, it is accessible via the segment_volume API, and the option seems to be used by SCT deepseg here.
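For intuition, what the ROI path does with the prior (crop around its center of mass) can be sketched in a few lines of NumPy. This is a toy illustration, not ivadomed's actual implementation; `roi_center` and `crop_around_center` are hypothetical names:

```python
import numpy as np

def roi_center(mask):
    """Center of mass of a binary ROI mask, in voxel coordinates."""
    coords = np.argwhere(mask > 0)
    return coords.mean(axis=0)

def crop_around_center(img, center, size):
    """Crop a `size`-shaped window around `center`, clipped to the image bounds."""
    slices = []
    for c, s, dim in zip(center, size, img.shape):
        start = max(0, min(int(round(c - s / 2)), dim - s))
        slices.append(slice(start, start + s))
    return img[tuple(slices)]

# Toy 2D example: a small ROI offset from the image center.
# (Here the "image" doubles as its own ROI mask for simplicity.)
img = np.zeros((64, 64))
img[30:40, 10:20] = 1.0
center = roi_center(img)                       # center of mass of the ROI
patch = crop_around_center(img, center, (16, 16))
```

The real ROICrop also handles slice-wise processing and filtering, but the core idea is the same: the prior segmentation only supplies the crop center.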

Let me know if that answers your questions and/or if your context differs from what I listed here.

jcohenadad commented 1 year ago

From what I gathered from your initial comment, the second possibility seems to be what you are looking for.

This is correct. The "roi_params" option seems very close to what I have in mind. However, my understanding is that this option crops the entire 3D volume to create a smaller 3D volume, which then enters the training pipeline. The problem with this approach is that cropping a 3D volume in a 'single shot' produces patches that are bigger than they could be if the cropping were done for each patch instead. In the example below, the segmentation (aka ROI) is in white, the volume-based crop is in yellow, and what I have in mind is in green:

[image: segmentation (white), volume-based crop (yellow), proposed per-patch crops (green)]

In this example, the yellow crop will be further subdivided into 2D or 3D patches (depending on whether a 2D or 3D kernel is used), whereas the green crops are already the patches. I hope my explanation makes sense.
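The difference can be sketched with NumPy: one bounding box over the whole volume (the "yellow" crop) versus one tight in-plane box per slice (the "green" crops). For a tilted or curved structure, each per-slice box stays narrow while the volume-wide box must span the full drift. A toy illustration, with hypothetical names `volume_bbox` and `per_slice_bboxes`:

```python
import numpy as np

def volume_bbox(mask):
    """One bounding box around the whole 3D segmentation (the 'yellow' crop)."""
    coords = np.argwhere(mask)
    return coords.min(axis=0), coords.max(axis=0) + 1

def per_slice_bboxes(mask):
    """One tight in-plane bounding box per slice (the 'green' crops)."""
    boxes = {}
    for z in range(mask.shape[0]):
        coords = np.argwhere(mask[z])
        if coords.size:
            boxes[z] = (coords.min(axis=0), coords.max(axis=0) + 1)
    return boxes

# Toy "tilted cord": a 4x4 blob that drifts across columns with depth.
mask = np.zeros((10, 32, 32), dtype=bool)
for z in range(10):
    mask[z, 14:18, 2 + 2 * z:6 + 2 * z] = True

lo, hi = volume_bbox(mask)        # column extent spans the whole drift (22 wide)
green = per_slice_bboxes(mask)    # each slice box is only 4 columns wide
```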

I understand this additional feature might be complicated to implement, but it is worth considering/discussing.

mariehbourget commented 1 year ago

2. A combination of roi_params and ROICrop. In that case:

* A segmentation file with a specific `suffix` (ex: `_seg-manual`) and already present in the derivatives folder of the dataset is used as a reference.
* Then the `ROIcrop` crops the image around the center of mass of the reference segmentation.

From what I gathered from your initial comment, the second possibility seems to be what you are looking for.

After giving it some thought, I think that the ROI method is not implemented for 3D, nor for 2D with patches:

In 2D, there is a roi_pair that is used later to compute the transform here. But with patches, we somehow dismiss the ROI later on here.

In 3D, there is only a seg_pair and no roi_pair... therefore no ROI transforms.

In this example, the yellow crop will be further subdivided into 2D or 3D patches (depending if using 2D or 3D kernel), whereas the green crops are already the patches. I hope my explanation makes sense.

I understand this additional feature might be complicated to implement, but it is worth considering/discussing.

Yes, I understand what you mean; there would be 2 ways of doing this (ROICrop before or after the patches).

I'm not sure about the "green" way. At some point, we need a continuous volume to reconstruct the prediction, which would be the "yellow" one. Also, keep in mind that patches are all the same size, so I'm not sure how much "space" we would actually save in green. In any case, implementing the ROI method with patches (2D or 3D) would require some more serious time and investigation.

As for the "bounding box" method, from my understanding, it would give the "yellow" result. However, modifying it to take segmentation files instead of a model that does segmentation on the fly would bring other loading issues: we would need to index/load/pair the segmentation files with the source files... and loading new pairs of files is, let's say, delicate 😅.

jcohenadad commented 1 year ago

Amazing! Thank you for the very helpful guidance @mariehbourget. Not surprisingly, the 'green' option seems tricky to implement, and as you said, the gain might not be that large, so let's leave it as a feature request for now.

Making it possible to input ROIs and use them as bounding boxes would be quite useful, I think. I'm happy to dive into the pairing delicacy with your help and the help of @dyt811. I hope students will also help with the feature implementation.

jcohenadad commented 1 year ago

Related to https://github.com/ivadomed/ivadomed/issues/1018, for which there is a draft PR https://github.com/ivadomed/ivadomed/pull/1052 @mariehbourget

mariehbourget commented 1 year ago

Related to #1018, for which there is a draft PR #1052 @mariehbourget

Yes, it's related in the sense that it provides an additional option to CenterCrop or ROICrop. However, what it does is sample a given number of patches for training instead of covering the whole image/volume. It does not deal with prior information (model or segmentation) at all.

jcohenadad commented 1 year ago

BTW, this is what is already implemented for sct_deepseg_sc:

https://github.com/spinalcordtoolbox/spinalcordtoolbox/blob/1b5c49c3904dffe63546adbadc47759a53fbc8ca/spinalcordtoolbox/deepseg_/sc.py#L455-L460