ANTsX / ANTs

Advanced Normalization Tools (ANTs)
Apache License 2.0

antsRegistration: difference between providing a mask or explicit brain extraction #483

Closed lennartverhagen closed 7 years ago

lennartverhagen commented 7 years ago

Most guides/tutorials/wrappers for antsRegistration seem to suggest using explicitly brain-extracted images for the moving (and fixed) image(s). antsRegistration also allows you to specify a binary brain mask, both for the fixed and for the moving image. I would have expected these approaches to lead to exactly the same behaviour, where voxels outside the mask, or with a zero intensity, are ignored when calculating the metric. Our practice suggests that there are small differences. Would you guys be able to give a bit of background on the differences under the hood? Or point me in the right direction for a reference/thread discussing this matter?

My context: I'm interested in using a mask rather than zeroing out the voxels to ignore. It is not vital, but it would lead to a more straightforward pipeline. For example, I like to use B-spline interpolation but would like to avoid edge/cliff/ringing artefacts, so I would generally interpolate the whole image and mask later. Also, for one problem I'm running antsRegistration iteratively, with improved brain extraction but the same moving and fixed images at each step. It would be slightly more elegant to update just the mask. As you can see, neither of these is impossible without explicit brain extraction, but these topics did make us wonder about the under-the-hood nature of masks in antsRegistration.
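For reference, a minimal sketch of the mask-based approach being asked about (all file names here are hypothetical): antsRegistration accepts a registration mask via `-x`/`--masks`, which restricts the metric to voxels inside the mask while leaving the input images themselves untouched:

```shell
# Rigid stage on whole-head images plus explicit metric masks;
# voxels outside the masks are excluded from the MI computation.
antsRegistration --dimensionality 3 \
  --metric MI[fixed_head.nii.gz,moving_head.nii.gz,1,32] \
  --transform Rigid[0.1] \
  --convergence [1000x500x250,1e-6,10] \
  --smoothing-sigmas 2x1x0vox \
  --shrink-factors 4x2x1 \
  --masks [fixed_brain_mask.nii.gz,moving_brain_mask.nii.gz] \
  --output [masked_,masked_warped.nii.gz]
```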

ntustison commented 7 years ago

I would have expected that these approaches should lead to exactly the same behaviour...

They're not the same: the brain-mask approach computes the similarity metric only within the mask, whereas using the skull-stripped version uses the entire image. The major difference is the evaluation of the similarity metric at the boundaries of the mask, where edge features can drive the registration. That's why we recommend using the skull-stripped images.
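To make the skull-stripped alternative concrete (file names hypothetical): a stripped image can be produced by multiplying the head image by its binary brain mask, e.g. with ANTs' ImageMath `m` (multiply) operation, and then passed to antsRegistration without any `-x` mask:

```shell
# Zero out everything outside the brain mask; the metric then sees the
# whole (mostly zero) image, including the brain/background edge.
ImageMath 3 fixed_brain.nii.gz m fixed_head.nii.gz fixed_brain_mask.nii.gz
ImageMath 3 moving_brain.nii.gz m moving_head.nii.gz moving_brain_mask.nii.gz

antsRegistration --dimensionality 3 \
  --metric MI[fixed_brain.nii.gz,moving_brain.nii.gz,1,32] \
  --transform Rigid[0.1] \
  --convergence [1000x500x250,1e-6,10] \
  --smoothing-sigmas 2x1x0vox \
  --shrink-factors 4x2x1 \
  --output [stripped_,stripped_warped.nii.gz]
```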

dorianps commented 7 years ago

This is good to know. I think I was also using the mask with skull-stripped images. I guess the gain might be in the time required for registration; the full skull-stripped image might require more time/memory, right?


ntustison commented 7 years ago

If you're simply thinking about the current stage and the number of voxels processed per similarity-metric iteration, then yes, it's going to take longer. However, you'd have to take into account that earlier stages are probably much faster per iteration (e.g., rigid vs. deformable), and that with the skull-stripped images the previous stage might leave you at a closer starting position for the current stage, which could make your overall time faster.

ntustison commented 7 years ago

One way to limit the time and still take advantage of the edge features of the skull-stripped images is to simply dilate your brain extraction mask by a certain number of voxels and use that as your registration mask along with the skull-stripped images. That limits the number of background voxels processed while maintaining the edge features.
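That suggestion could look like the following sketch (mask and image names hypothetical); ImageMath's `MD` operation performs a morphological dilation of a binary mask with the given radius in voxels:

```shell
# Dilate the fixed brain mask by 5 voxels so the metric still covers
# the brain edge, but skips most of the background.
ImageMath 3 fixed_brain_mask_dilated.nii.gz MD fixed_brain_mask.nii.gz 5

# Use the skull-stripped images together with the dilated mask.
antsRegistration --dimensionality 3 \
  --metric MI[fixed_brain.nii.gz,moving_brain.nii.gz,1,32] \
  --transform Rigid[0.1] \
  --convergence [1000x500x250,1e-6,10] \
  --smoothing-sigmas 2x1x0vox \
  --shrink-factors 4x2x1 \
  --masks [fixed_brain_mask_dilated.nii.gz] \
  --output [dilated_,dilated_warped.nii.gz]
```

Passing a single mask to `--masks` applies it on the fixed-image side; a dilated moving-image mask could be supplied in the same bracketed list if needed.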

dorianps commented 7 years ago

Thanks, that's useful.


lennartverhagen commented 7 years ago

Thanks Nick! This is very useful indeed. I now see where my assumption(s) were wrong. I naively thought that the zeros in the skull-stripped image would be interpreted as an implicit mask and be ignored for the similarity metric. I now understand that they are not. This does mean that it is important that the skull-stripping is performed for both the moving and fixed image at a similar tightness/dilation. In my iterative approach I will probably have to reconsider if I need to dilate the fixed image skull-stripping as well. Anyhow, I'm already getting good results, so perhaps I shouldn't touch it ;). Thanks again, clearly explained.

adhusch commented 7 years ago

Hi Nick!

You said:

...where edge features can drive the registration. That's why we recommend using the skull-stripped images.

Don't you think that relying solely on skull-stripped images bears the risk of driving the registration by (potentially error-prone) "artificial" edge features introduced by the skull-stripping algorithm? I guess the very strong gradients at the edges induced by skull-stripping might strongly influence the outcome of any global registration scheme?

(I had cases in the past where partial over-segmentation by the skull-stripping pushed ANTs to converge to rubbish. Since then I only use skull-stripped images for an initial moving transform [where skull-stripping clearly is beneficial to get meaningful centers of gravity] and then switch to the full images for all following stages; this has turned out to be an extremely robust strategy, even on very diverse clinical data.)
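A sketch of that two-stage strategy (file names hypothetical): antsRegistration's `-r`/`--initial-moving-transform` can derive a center-of-mass alignment from one image pair, so the skull-stripped pair can seed the initialization while the metric stages run on the full-head images:

```shell
# Initialization from the skull-stripped pair (feature 1 = align centers
# of mass), then rigid and affine stages driven by the full-head images.
antsRegistration --dimensionality 3 \
  --initial-moving-transform [fixed_brain.nii.gz,moving_brain.nii.gz,1] \
  --metric MI[fixed_head.nii.gz,moving_head.nii.gz,1,32] \
  --transform Rigid[0.1] \
  --convergence [1000x500,1e-6,10] \
  --smoothing-sigmas 1x0vox \
  --shrink-factors 2x1 \
  --metric MI[fixed_head.nii.gz,moving_head.nii.gz,1,32] \
  --transform Affine[0.1] \
  --convergence [1000x500,1e-6,10] \
  --smoothing-sigmas 1x0vox \
  --shrink-factors 2x1 \
  --output [twostage_,twostage_warped.nii.gz]
```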

Cheers Andreas

ntustison commented 7 years ago

If your data processing upstream is error-prone, then that's obviously going to affect current processing outcomes ("garbage in --> garbage out"), but I didn't assume that situation when I read the OP or the subsequent questions.

One of Arno's evaluation papers found the following:

The present study is the first that directly compares some of the most accurate of these volume registration methods with surface registration methods, as well as the first study to compare registrations of whole-head and brain-only (de-skulled) images... Our primary findings are the following: 1. de-skulling aids volume registration methods...

Arno didn't evaluate the various permutations of fixed/moving image possibilities that one can employ using antsRegistration but there certainly are more sophisticated strategies that one can employ with this tool.

spinicist commented 7 years ago

A quick thanks for this thread -- it matches my experience with masking in rats/mice. I've often found that explicitly masking the brains beforehand gave better results; now I know why.