fepegar / torchio

Medical imaging toolkit for deep learning
http://www.torchio.org
Apache License 2.0
2.04k stars 239 forks

Image intensity augmentations #1187

Open IDoCodingStuffs opened 1 month ago

IDoCodingStuffs commented 1 month ago

🚀 Feature

Transforms that shift voxel intensity, such as intensity flipping (i.e. 1 - val, for val in [0, 1]), cluster-and-remap, contrast jitter, etc.

Motivation

I am working on a spine segmentation problem on MRI, where I need to train a model to perform across multiple pulse-sequence modalities, but the training data contains only a single modality. As a result, models tend to pick up on intensity features and perform very poorly on modalities with different intensity distributions (see the middle image vs. the others).

[image: segmentation results compared across pulse sequences]

Pitch

Adding transforms that shift intensity features would encourage models to learn shapes and contours rather than absolute intensity values.

Alternatives

One very basic approach could be something like this:

import numpy as np
import torch
import torchio as tio

class RandomFlipIntensity:
    def __call__(self, input: tio.Subject, p=0.5):
        if np.random.random() <= p:
            tensor = input["image"].tensor
            max_intensity = torch.max(tensor)
            min_intensity = torch.min(tensor)
            # Reverse the intensity ordering while preserving the range
            flipped = max_intensity - tensor + min_intensity
            input = tio.Subject(
                image=tio.ScalarImage(tensor=flipped, affine=input["image"].affine),
                segmentation=tio.LabelMap(
                    tensor=input["segmentation"].tensor,
                    affine=input["segmentation"].affine,
                ),
            )
        return input
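For what it's worth, the flip itself is a one-liner that could also be packaged as a torchio `Lambda` transform so it composes with other transforms and only touches intensity images. A minimal sketch below; the `flip_intensity` helper name is mine, and the `tio.Lambda` line is an untested suggestion:

```python
import numpy as np

def flip_intensity(tensor):
    # v -> max + min - v reverses the intensity ordering while keeping the
    # original range; the same expression works on torch tensors and numpy arrays
    return tensor.max() + tensor.min() - tensor

# With torchio available, this would apply only to intensity images, leaving
# label maps untouched (sketch):
#   flip = tio.Lambda(flip_intensity, types_to_apply=[tio.INTENSITY], p=0.5)
```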
romainVala commented 1 month ago

Hi

Transforming intensity in a physically plausible manner is not easy, and there are very few effective transforms (the only one in torchio was RandomGamma, but in my experience it was not effective enough). The transform you propose sounds good because it makes a radical change by inverting the intensity, so it is worth adding.
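For context, the gamma jitter idea is essentially raising normalized intensities to a random power. A rough numpy sketch of that idea, assuming intensities are rescaled to [0, 1] first; this is not torchio's exact RandomGamma implementation, which works on torch tensors and treats negative values differently:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_gamma(arr, log_gamma_range=(-0.3, 0.3)):
    # Sample gamma = exp(u), u ~ Uniform(log_gamma_range), so gamma is
    # symmetric around 1 on a log scale
    gamma = np.exp(rng.uniform(*log_gamma_range))
    lo, hi = arr.min(), arr.max()
    normed = (arr - lo) / (hi - lo)          # map to [0, 1] (assumes hi > lo)
    return normed ** gamma * (hi - lo) + lo  # restore the original range
```

Because the power map is monotone on [0, 1], the minimum and maximum of the input are preserved; only the mid-range contrast is stretched or compressed.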

I would be very interested to know whether this is enough to generalize properly.

The best approach to get a contrast-agnostic model is Billot's SynthSeg approach, since it trains the model with truly random contrasts and the results are impressive (it segments any contrast!). However, it requires very good, realistic labels (including structures in the background), so it may not be easily applicable to your use case.