ANTsX / ANTsR

R interface to the ANTs biomedical image processing library
https://antsx.github.io/ANTsR
Apache License 2.0

registration of binary images vs. Maurer distance maps #344

Open ptsii opened 3 years ago

ptsii commented 3 years ago

I'm using ANTsR tools to compare binary surfaces (virtual brain endocasts). I have found that using SyNRA on binary images gives good results, in the sense that the warped moving image looks very close to the target fixed image (so to speak). Here is what I did:

binaryImageA_into_binaryImageB_reg <- antsRegistration(binaryImageB, binaryImageA,
  typeofTransform = "SyNRA",
  affIterations = c(40, 30, 20, 10),
  regIterations = c(40, 30, 20, 10))
binaryImageA_into_binaryImageB_reg_image <- antsApplyTransforms(binaryImageB, binaryImageA,
  transformlist = binaryImageA_into_binaryImageB_reg$fwdtransforms,
  interpolator = "nearestNeighbor")

Here is what they look like plotted over each other:

plot(binaryImageB, binaryImageA_into_binaryImageB_reg_image, alpha = .9)

(attachments: using_binary_masks.pdf, using_binary_masks)

However I wondered if I might get an even closer fit if I registered Maurer distance maps of the two binary images. However, when I do this, the warped moving image looks like a significantly smaller version of the target fixed image. Here is what I did:

> binaryImageB_distanceMap <- iMath(binaryImageB, "MaurerDistance")
> binaryImageA_distanceMap <- iMath(binaryImageA, "MaurerDistance")
> binaryImageA_distanceMap_into_binaryImageB_distanceMap_reg <- antsRegistration(binaryImageB_distanceMap, binaryImageA_distanceMap, typeofTransform = "SyNRA", affIterations = c(40, 30, 20, 10), regIterations = c(40, 30, 20, 10))
> binaryImageA_distanceMap_into_binaryImageB_distanceMap_reg_image <- antsApplyTransforms(binaryImageB, binaryImageA, transformlist = binaryImageA_distanceMap_into_binaryImageB_distanceMap_reg$fwdtransforms, interpolator = "nearestNeighbor")
> plot(binaryImageB, binaryImageA_distanceMap_into_binaryImageB_distanceMap_reg_image, alpha = .9)

Here is what the plot overlay looks like: (attachments: using_distance_maps_for_registration.pdf, using_distance_maps_for_registration)

I'm curious why this happens. Is this expected behavior? I would have thought the registration would work even better. Perhaps I'm using an inappropriate registration type?

Thanks for any thoughts.

-Tom

ntustison commented 3 years ago

I don't see anything on the pdfs.

ptsii commented 3 years ago

On my machine it takes 20 seconds or so to render, for some reason (something about R's PDF creation engine?). Anyway, here are PNG versions (I'll also update the original post):

registrations using binary masks: using_binary_masks

registrations using distance maps: using_distance_maps_for_registration

ptsii commented 3 years ago

I updated the post with PNG images - not sure why the R PDFs take so long to render.

Thanks,

-Tom


ntustison commented 3 years ago

The default SyN metric is mutual information. For the scheme you're proposing, you'd have to use something like mean squares.
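Concretely, the metric for the SyN stage can be switched while keeping everything else the same. A sketch, assuming the `synMetric` argument available in recent ANTsR releases of `antsRegistration` (variable names follow the original post):

```r
library(ANTsR)

# Sketch: re-run the distance-map registration with a mean-squares SyN metric,
# which penalizes absolute intensity differences - appropriate for distance maps.
reg_ms <- antsRegistration(
  binaryImageB_distanceMap, binaryImageA_distanceMap,
  typeofTransform = "SyNRA",
  synMetric = "meansquares",        # instead of the default mutual information
  affIterations = c(40, 30, 20, 10),
  regIterations = c(40, 30, 20, 10)
)

# Apply the resulting transforms to the original binary image as before
warped_ms <- antsApplyTransforms(binaryImageB, binaryImageA,
  transformlist = reg_ms$fwdtransforms,
  interpolator = "nearestNeighbor")
```

Mean squares only makes sense here because the two distance maps share the same physical intensity scale (mm), which mutual information deliberately ignores.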

ptsii commented 3 years ago

No meaningful difference using typeofTransform = "SyNCC":

Using binary masks: using_binary_masks_SyNCCreg

Using distance maps of the binary masks: using_distance_maps_and_SyNCC_for_registration

Any thoughts?

ntustison commented 3 years ago

As expected. "SyNCC" uses neighborhood correlation, whereas I mentioned mean squares. Neighborhood correlation suffers from the same problem as mutual information in this situation: it does not quantify an absolute difference between the two images you're trying to align. So a perfectly aligned image patch and one that is slightly shifted normal to the boundary (as in the top vs. the bottom) are going to have basically the same correlation values.

stnava commented 3 years ago

reproducible example:

library(ANTsR)

img1 <- ri(1)   # built-in ANTsR example images
img2 <- ri(2)
b1 <- getMask(img1)
b2 <- getMask(img2)

# Flip the Maurer distance so values are positive inside the mask,
# then zero out the (negative) exterior
interiorDistance1 <- -1.0 * iMath(b1, "MaurerDistance")
interiorDistance2 <- -1.0 * iMath(b2, "MaurerDistance")
interiorDistance1 <- interiorDistance1 * thresholdImage(interiorDistance1, 0, Inf)
interiorDistance2 <- interiorDistance2 * thresholdImage(interiorDistance2, 0, Inf)

reg <- antsRegistration(interiorDistance1, interiorDistance2, "SyNCC")

layout(matrix(1:2, nrow = 1))
plot(interiorDistance1)
plot(reg$warpedmovout)

let me know if this improves things. This would register both the image boundaries and their geometric skeletons.

ptsii commented 3 years ago

Excellent - thanks! That did improve the registration of the distance map versions, which now look very close to what I get using SyNRA on binary images: using_internal_distance_maps_and_SyNCC_for_registration

However, it isn't significantly better (at matching the surfaces) than SyNRA on binary images, and it's a LOT slower.

To get an idea of the relative fit of the two methods, I created distance maps of the registered (nearest neighbor) binary images using each of the two registrations, and calculated the root mean squared values for (i.e. distances to) all the surface voxels of the fixed image:

> binaryImageA_into_binaryImageB_reg_image_distanceMap<-iMath(binaryImageA_into_binaryImageB_reg_image, "MaurerDistance")  #  distance map of original SyNRA registered binaryImageB
> binaryImageB_shellmask <- binaryImageB - iMath(binaryImageB,"ME")   #  surface voxels of binaryImageB only
> sqrt(mean((binaryImageA_into_binaryImageB_reg_image_distanceMap[binaryImageB_shellmask==1])^2))
[1] 0.4887567

Here is the calculation for the result of registering the distance maps as per stnava's suggestion above:

>  binaryImageA_into_binaryImageB_distanceMap_interior_SyNCCreg_image_distanceMap<-iMath(binaryImageA_into_binaryImageB_distanceMap_interior_SyNCCreg_image, "MaurerDistance") # distance map of result of stnava's interior distance SyNCC registered binaryImageB
> sqrt(mean((binaryImageA_into_binaryImageB_distanceMap_interior_SyNCCreg_image_distanceMap[binaryImageB_shellmask==1])^2))
[1] 0.4835682

The values are in mm, so both registrations produce surfaces that are very close - averaging less than half a mm from each other (RMSE). The interior distance map registration was a tiny bit better overall, but took MUCH longer (I didn't time it, but could if anyone is interested).
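The comparison above can be wrapped in a small helper so both registrations are scored the same way. This is just a sketch built from the ANTsR calls already shown; `surfaceRMSE` is a name introduced here, not an ANTsR function:

```r
# Sketch: RMSE of Maurer distances to the warped surface, sampled at the
# fixed image's surface voxels. Result is in physical units (mm).
surfaceRMSE <- function(fixedBinary, warpedBinary) {
  dmap  <- iMath(warpedBinary, "MaurerDistance")   # distance to warped surface
  shell <- fixedBinary - iMath(fixedBinary, "ME")  # one-voxel surface shell of fixed image
  sqrt(mean((dmap[shell == 1])^2))
}

# e.g. surfaceRMSE(binaryImageB, binaryImageA_into_binaryImageB_reg_image)
```

Because the shell mask and the distance map are computed identically for every candidate registration, the helper gives a like-for-like surface-fit score across methods.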

-Tom

stnava commented 3 years ago

If you want to know the effect on speed, then you should change only the input images, not the transformation model or the similarity metric. I used SyNCC; you used something else.

Second, we would not expect this to improve the surface registration, only the registration of the skeletons and interior structures.

This will impact things like the shape of the deformation field and related measurements such as the Jacobian.

ptsii commented 3 years ago

Thanks - I'm ultimately interested only in the surface Jacobians, because these are analyses of fossil endocasts, and we generally have no direct evidence of the interiors (so we don't know what to make of them anyway, though people have tried to infer things). Getting surface registrations as close as possible is our immediate goal. My point about speed was that if a different transformation model and/or similarity metric produces essentially the same surface registration accuracy but takes much longer to compute, then it isn't clear the added computation time would be worth it - again, for our purposes.

If there are thoughts on getting the surface registrations even closer, let me know (though less than 0.5 mm seems close enough for hand grenades).
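For the surface-Jacobian analysis, something along these lines should work. A sketch using ANTsR's `createJacobianDeterminantImage`; the assumption (worth verifying) is that `fwdtransforms[1]` holds the SyN warp field for a "SyNRA" registration, with variable names from the earlier posts:

```r
# Sketch: log-Jacobian of the nonlinear warp, sampled at the fixed surface voxels.
jac <- createJacobianDeterminantImage(
  binaryImageB,                                         # domain (fixed) image
  binaryImageA_into_binaryImageB_reg$fwdtransforms[1],  # warp field component
  doLog = TRUE                                          # log scale: 0 = no local volume change
)

shell <- binaryImageB - iMath(binaryImageB, "ME")       # surface voxels only
surfaceJacobian <- jac[shell == 1]
summary(surfaceJacobian)
```

Restricting to the shell mask matches the stated goal of analyzing deformation only at the endocast surface, ignoring the (unobserved) interior.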

-Tom

gdevenyi commented 3 years ago

Maybe, if you're interested exclusively in surface deformations, you should be using a surface-based tool? I bet @r03ert0, who recently published https://www.sciencedirect.com/science/article/pii/S1053811920311708, could be of assistance.

r03ert0 commented 3 years ago

hello @ptsii and @gdevenyi - the meshes we align all have spherical topology. Is that your case @ptsii? If yes, there are several tools you can use, in particular B.T. Yeo's Spherical Demons and Emma Robinson's MSM. Those tools work well if your meshes have clear geometric landmarks, readily apparent if you compute mean curvature. In our case, we work with brain surfaces that can be pretty smooth (developing brains, or little-folded primate brains). We have an in-house tool (which should be made available before too long) that works well on that type of surface, and we'd be happy to collaborate <3 (ping @katjaq)

ptsii commented 3 years ago

Thanks everyone, yes, these would potentially be useful. It looks like Yeo's new site is https://sites.google.com/view/yeolab ? I'll look into these. With respect to spherical topology: yes for many of our analyses, but fossil specimens are very often incomplete, in different areas across different specimens. I'm not sure surface-based tools are ultimately best if the goal is inferring internal structure (one of our goals), but my intuitions could be wrong.