fepegar / torchio

Medical imaging toolkit for deep learning
https://torchio.org
Apache License 2.0

Allow controlling intensity of RandomMotion transform #765

Open dzenanz opened 2 years ago

dzenanz commented 2 years ago

🚀 Feature

Motivation

The intensity of the RandomMotion transform seems to depend mostly on the "time" of the motion. Yet while the RandomGhosting transform exposes an intensity parameter, RandomMotion neither exposes the times parameter nor has an intensity parameter.

Pitch

Either provide an intensity parameter, or allow setting the range of the times parameter used internally.

Alternatives

Save the image before the transform is applied, then "manually" blend the transformed image into the original one with a custom weight, thus emulating intensity.
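
The blending alternative can be sketched in plain Python (a hypothetical blend helper, not part of TorchIO; plain lists stand in for image tensors here):

```python
def blend(original, transformed, weight):
    # weight=0 leaves the original untouched; weight=1 gives the fully
    # transformed (corrupted) image, so `weight` emulates intensity.
    return [(1 - weight) * o + weight * t
            for o, t in zip(original, transformed)]

half = blend([0.0, 2.0], [2.0, 4.0], 0.5)  # [1.0, 3.0]
```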

Additional context

I am trying to augment training by creating bad images for an image quality estimator, because most images in my training set are good. I would like to control the degree of corruption, e.g. to control whether I produce an image with a rating of 1/10 or 4/10.

romainVala commented 2 years ago

Well, controlling motion artefact severity is a difficult problem, because the exact same motion (say, a short translation of x mm) will induce very different artefacts depending on when the motion occurs. If it occurs at the center of k-space (i.e. in the middle of the motion time course), the changes will be the worst compared with the beginning. Is that what you mean by controlling the time? But I do not see how to control it: if you have more than 2 displacements, do you want to control the time of each displacement? Even then, the relation will not be obvious.

About the intensity of the motion transform: the angle and the translation are directly related to it. Small motion (small translation and angle) will produce a less artefacted image, but only if you compare motions with the same timing, so again, not easy.

I am currently working on quantifying motion artefact severity. Having an estimate from the motion time course would be nice, but I have not found a simple way yet...

Maybe a better alternative is to compute a difference metric (L1, L2, NCC, ...) between the image before and after the motion artefact, and use that metric to approximate the artefact severity.
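
The metric idea above can be sketched with plain Python lists standing in for voxel arrays (the function name and the normalization are illustrative assumptions, not TorchIO API):

```python
def l1_severity(original, corrupted):
    # Mean absolute difference between the image before and after the
    # artefact, normalized by the original's total absolute intensity so
    # the score is comparable across images.
    diff = sum(abs(a - b) for a, b in zip(original, corrupted))
    norm = sum(abs(a) for a in original)
    return diff / norm if norm else 0.0

clean = [0.0, 1.0, 2.0, 3.0]
l1_severity(clean, clean)                 # 0.0: no corruption
l1_severity(clean, [0.5, 1.5, 2.5, 3.5])  # > 0: grows with corruption
```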

dzenanz commented 2 years ago

This answer itself is useful too. I guess my formula for converting corruption into a 0-10 scale is generally OK; it might need some fine-tuning.

romainVala commented 2 years ago

I do not agree: the strongest artefact appears at the k-space center, so in the middle. Maybe min(time, 1 - time), but what I do not understand is how you account for the number of changes (len(times), i.e. num_transforms in RandomMotion).

And actually it is not the motion onset that is important, but the motion duration (so the difference times[i+1] - times[i], ...).

dzenanz commented 2 years ago

I thought I would only have one motion instead of multiple (motion = CustomMotion(p=0.2, degrees=5.0, translation=5.0, num_transforms=1)) in order to simplify my life. I guess I was misunderstanding how motion simulation works.

I think I am satisfied with how I handle artificial ghosting (ghosting = CustomGhosting(p=0.3, intensity=(0.2, 0.8))). What would be the most similar way to handle motion?

romainVala commented 2 years ago

OK, I see. With only one motion (num_transforms=1), I would still take min(time, 1 - time) * 2, so you get the maximum in the middle.
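
This heuristic can be written down directly (a sketch of the rule of thumb above, not TorchIO code):

```python
def kspace_weight(t):
    # Heuristic severity weight for a single motion at normalized time
    # t in [0, 1]: motion in the middle of the acquisition (k-space
    # center) weighs 1.0, motion at either edge weighs close to 0.
    return 2 * min(t, 1 - t)

kspace_weight(0.5)   # 1.0: worst case, k-space center
kspace_weight(0.05)  # 0.1: motion near the start barely matters
```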

Unfortunately, motion cannot be made similar to the other artefacts...

dzenanz commented 2 years ago

What happens if there is only 1 motion? Does it implicitly end at time=1?

So motion around t=0.5 has the greatest effect? How would the effect be quantified? For example, is motion with times=[0.1, 0.2] half as noticeable, or a fifth as noticeable, as motion with times=[0.45, 0.55] (everything else being equal)? I cannot explore this well using the Slicer plugin like I can for ghosting. Hence I ask for times to be exposed in a similar way to degrees and translation.

romainVala commented 2 years ago

1 motion means one change, so 2 positions are averaged over [0, t] and [t, 1]. (2 motions, 3 positions: [0, t1], [t1, t2], [t2, 1], ...)
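
The position/segment bookkeeping described above can be sketched as follows (a hypothetical helper, not the TorchIO implementation):

```python
def segments(times):
    # n motion times split the normalized acquisition interval [0, 1]
    # into n + 1 segments, one per position; the segment lengths are the
    # durations that matter for severity (times[i+1] - times[i]).
    boundaries = [0.0, *sorted(times), 1.0]
    return list(zip(boundaries[:-1], boundaries[1:]))

segments([0.3])       # [(0.0, 0.3), (0.3, 1.0)]: one motion, two positions
segments([0.3, 0.7])  # three segments, i.e. three positions
```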

For the Slicer plugin I don't know (but the Motion transform already has times as an argument...).

fepegar commented 2 years ago

I coded this transform a long time ago reading Richard Shaw's paper. My version is a bit simplified, but it works. I am away at a conference now, but I'll try to add some explanations to the docs when I'm back.

For now, maybe you can just use a convex combination of the original image and the transformed one:

```python
import torch
import torchio as tio


class MyRandomMotion(tio.RandomMotion):
    def __init__(self, *, intensity, **kwargs):
        self.intensity = intensity
        super().__init__(**kwargs)

    def apply_transform(self, subject):
        # Apply the usual random motion, then blend each transformed image
        # back into its original with weight `intensity` (0 = original,
        # 1 = fully corrupted).
        transformed = super().apply_transform(subject)
        for image_name in self.get_images_dict(subject):
            original = subject[image_name]
            new = transformed[image_name]
            alpha = self.intensity
            composite_data = new.data * alpha + original.data * (1 - alpha)
            transformed[image_name].set_data(composite_data)
        return transformed


fpg = tio.datasets.FPG()
seed = 42

# Same seed, so both calls simulate the same motion; only the blending differs.
transform = MyRandomMotion(intensity=0)
torch.manual_seed(seed)
transform(fpg).t1.plot()

transform = MyRandomMotion(intensity=1)
torch.manual_seed(seed)
transform(fpg).t1.plot()
```

Figure 1: output with intensity=0 (no visible motion artefact)

Figure 2: output with intensity=1 (full motion artefact)

fepegar commented 2 years ago

If you like this approach, we can add this behavior to RandomMotion.

romainVala commented 2 years ago

@fepegar would it be possible to add the Motion transform to Slicer? (Same for the others, not only the random transforms.)

dzenanz commented 2 years ago

Adding alpha-blending is a simple and effective way of controlling intensity. Its place is in the Motion transform, so the user only needs to pass the right parameter to Motion, and the right range of parameters to RandomMotion.

Adding the non-random transforms to the Slicer plugin would be useful for exploring the effects of parameters.

dzenanz commented 2 years ago

Full results of my initial attempt to use ghosting and motion are now in https://github.com/OpenImaging/miqa/issues/27#issuecomment-987292530. Quite a significant improvement! I hope I will be able to do even better using more augmentation transforms, more formula tuning, and more general experimentation with TorchIO.

fepegar commented 2 years ago

Awesome 💯

Happy to help if needed. I'll add the intensity kwarg soon.

romainVala commented 2 years ago

I am not a big fan of this intensity kwarg, because it is not realistic with regard to the MRI acquisition process. But OK, it is easy to add, and it can still be useful...

@fepegar would it be easy to add the Motion transform to the Slicer plugin? (This would answer the initial need for exploration.) More generally, it may be interesting for the other transforms too (i.e. not the random versions).

About motion: @dzenanz, be aware that this transformation can also induce some misalignment with the original volume, so depending on your application it may or may not be a problem... (What is your application?)

dzenanz commented 2 years ago

My application is image quality assessment, or to rephrase: drawing attention to images which are potentially low quality. The transform is used for training augmentation, so misalignment should not be a problem.

fepegar commented 2 years ago

> @fepegar would it be easy to add the Motion transform to the Slicer plugin? (This would answer the initial need for exploration.) More generally, it may be interesting for the other transforms too (i.e. not the random versions).

It would be easy, yes. But it would take a bit of time, which I don't really have now. Feel free to open a PR!