For this image registration scenario, you'd probably want to skull strip first. That head tilt between the fixed and moving images is going to make registration difficult.
Regardless, this call is problematic:
antsApplyTransforms -d 3 \
  -i $movingImage -r $movingImage \
  -o subject_to_template_Warped.nii.gz \
  -n Linear \
  -t subject_to_template_0GenericAffine.mat
Your reference image (-r) is your fixed image. See if changing that resolves any of the issues above.
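For concreteness, that change would look something like this (only the -r argument differs from the call above):

antsApplyTransforms -d 3 \
  -i $movingImage -r $fixedImage \
  -o subject_to_template_Warped.nii.gz \
  -n Linear \
  -t subject_to_template_0GenericAffine.mat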
Nick-
Thank you for the response. Yes, using the same image is intentional. I'd like the image in the fixedImage space but want to retain the grid-spacing of the original image. Further down the road I should be able to reference the fixedImage and use the identity transform to resample to that space if desired. So, some more examples, using the same datasets, either resampling straight to the fixedImage, or using the original data (fixedImage) resampled to its own grid-spacing and then resampled to the fixedImage:
Raw data, no padding:
antsApplyTransforms, -r $fixedImage, using mat file from my first post:
antsApplyTransforms, -r $fixedImage, -t identity:
Applying the transform straight to the fixedImage resamples the data as expected, no cropping. However, if I wish to keep the original grid-spacing and later resample using the identity matrix, you can see that the image is cropped off at the top.
If I do the same thing but use pixel-padded data, there is no cropping issue whatsoever.
Padded data:
antsApplyTransforms, -r $fixedImage (pixel-padded data), using mat file from my first post:
antsApplyTransforms, -r $fixedImage, -t identity (pixel-padded data):
Looking at these two in the same viewer, they are nearly identical. The only difference I can see is that the identity-transform output is slightly blurrier.
I realize this is a bit hard to follow and I can go into more depth on any step if you'd like. I'm trying not to give 10 different ANTs calls and 20 pictures in one post as it becomes a jumbled mess. The short story is that pixel-padded data doesn't seem to care whether I resample to the fixedImage directly or first resample to the native grid and then to the fixedImage via the identity matrix. Unpadded data will resample straight to the fixedImage, but if I try to hop through the native grid spacing and then the identity transform, the image gets cropped off at the top.
First, let's make sure something is clear. Based on the fixed/moving image information given above, this statement
Yes, using the same image is intentional. I'd like the image in the fixedImage space but want to retain the grid-spacing of the original image.
indicates to me that you have a misunderstanding of what the reference image denotes in the context of an antsApplyTransforms call. If you want to have the same grid spacing in the output image but have it reside in the fixed image space, you should resample your fixed image to have the same voxel resolution as your moving image (using ResampleImage) and then use that resampled image as the reference image in your antsApplyTransforms call.
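A rough sketch of that workflow (the 1x1x1 spacing and the output filenames are placeholders; substitute your moving image's actual voxel spacing, and check the usage text printed by ResampleImage if in doubt about the argument order):

# Resample the fixed image onto a grid with the moving image's voxel spacing
# (1x1x1 mm assumed here). The trailing "0 0" requests spacing-based resampling
# with linear interpolation, per the ResampleImage usage text.
ResampleImage 3 $fixedImage fixed_movingSpacing.nii.gz 1x1x1 0 0

# Then use that resampled image as the reference when applying the transform.
antsApplyTransforms -d 3 \
  -i $movingImage \
  -r fixed_movingSpacing.nii.gz \
  -o subject_to_template_Warped.nii.gz \
  -n Linear \
  -t subject_to_template_0GenericAffine.mat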
Nick-
Thank you for your prompt responses and for helping me through my problems.
I have used what you mentioned:
If you want to have the same grid spacing in the output image but have it reside in the fixed image space, you should resample your fixed image to have the same voxel resolution as your moving image (using ResampleImage) and then use that resampled image as the reference image in your antsApplyTransforms call.
and found that to work. I'm curious, though: in my example #2 above (pixel-padding the input image and then running it through the ANTs calls), why does either approach work (using the movingImage as the reference in antsApplyTransforms and later resampling to the fixedImage with the identity transform, vs. resampling straight to the fixedImage)?
I most likely do have a misunderstanding of what the reference image is, but looking at the help section, this is all I have to go off of:
For warping input images, the reference image defines the spacing, origin, size, and direction of the output warped image.
Are there any references you could point me to for further reading to help me better understand the terminology? In my mind, with the input and reference both set to the movingImage, I'm keeping the grid spacing of the movingImage, based on the help section, but I'm actually not? Is it that the reference in antsApplyTransforms only implies grid spacing but does not denote the image space?
The domains of both your fixed and moving images are defined as cubes in 3-D space. That domain information is pulled from the scanner and put into the DICOM header, which is maintained in the conversion to NIfTI format. Specifically, the domain is defined by its origin, orientation (i.e., direction), length, width, and height. But since we're dealing with sampled image data, the latter three are defined in terms of the voxel spacing and number of voxels in each dimension.
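If you want to see those attributes directly, PrintHeader (which you already ran above) dumps them for any image, for example:

# Dump origin, direction matrix, voxel spacing, and size for each image,
# then compare the fields between the two.
PrintHeader $fixedImage
PrintHeader $movingImage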
Your antsRegistration call determined the optimal transform parameters to map the moving image domain to the space of the fixed image domain (or "reference" space) based on the sampled intensity information. When looking at the help for antsApplyTransforms, my guess is that you looked at "spacing" and skipped the remaining information, all of which defines the reference space.
It looks like your moving image domain and fixed image domain overlap somewhat (and have the same orientation), so when you mapped into the moving image domain, you cut off the top of the head (i.e., the non-overlapping part of the fixed/moving image domains). But it's simply a coincidence that you saw any portion of your warped moving image at all. Suppose, for example, that your fixed and moving image domains were defined in such a way that they didn't overlap at all. If you tried to use the moving image as the reference image with the optimized transform, you would get a blank image.
The center of each voxel in the output warped image has a coordinate in physical space defined by the image header of the reference image (whatever you pass to -r). The transform(s) passed with -t are applied to that point, in physical space. If you have done this right, that takes you to the correct anatomical location in the moving image.
You can upsample the reference image because all that does is change the density of points that get warped, in physical space, to the moving image for resampling. If you have a reference image at 1mm isotropic resolution and you upsample it to 0.5mm isotropic, you have a more dense point set but the anatomical alignment is preserved. In other words, if you did this:
ResampleImageBySpacing 3 fixed.nii.gz fixedUpsample.nii.gz 0.5 0.5 0.5 0 0 0
Then a physical coordinate (x,y,z) would refer to the same anatomical location in both fixed.nii.gz and fixedUpsample.nii.gz. You can verify this by opening the two images in separate ITK-SNAP windows and moving the cursor to some point in one of the images. The voxel coordinates will differ but the physical coordinates should be identical (within some allowable epsilon for limitations in the representation of transforms).
This condition is what allows you to do registration with fixed.nii.gz as the fixed image, but then use fixedUpsample.nii.gz as the reference image in antsApplyTransforms. You could go the other way by downsampling fixed.nii.gz.

What you can't do is use a reference image that has a different definition of physical space from fixed.nii.gz.
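For concreteness, a sketch of that workflow with the filenames above (the moving image and output names are placeholders, and the transform is assumed to come from a registration run with fixed.nii.gz as the fixed image):

# Upsample the fixed image; the result shares the same physical space as
# fixed.nii.gz, so it is a valid reference image at apply time.
ResampleImageBySpacing 3 fixed.nii.gz fixedUpsample.nii.gz 0.5 0.5 0.5 0 0 0

antsApplyTransforms -d 3 \
  -i moving.nii.gz \
  -r fixedUpsample.nii.gz \
  -o moving_to_fixedUpsample.nii.gz \
  -n Linear \
  -t subject_to_template_0GenericAffine.mat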
One other thing that might be causing confusion here:
--initial-moving-transform [$fixedImage,$movingImage,0]
This aligns the geometric center of the two images. When you pad the moving image, you are initializing your registration differently. So you might get a different result, even if you are using antsApplyTransforms correctly.
I normally use
--initial-moving-transform [$fixedImage,$movingImage,1]
which aligns the center of mass of the two images. This can also give poor initialization if the bounding boxes are very different (because there might be a lot of non-brain tissue in one of the images), in which case a call to antsAI is recommended to try a variety of starting points.
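If it comes to that, a rough sketch of an antsAI call (parameter values here are illustrative only; check antsAI --help for the exact option syntax and defaults):

# Coarse search over starting positions for a rigid initialization
# (metric, transform, and search settings below are just placeholders).
antsAI -d 3 \
  -m Mattes[$fixedImage,$movingImage,32,Regular,0.25] \
  -t Rigid[0.1] \
  -s [20,0.12] \
  -c 10 \
  -o initialTransform.mat

The resulting initialTransform.mat could then be passed to antsRegistration via --initial-moving-transform initialTransform.mat in place of the bracketed center-of-mass initialization.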
Thanks @cookpa .
Apologies for the late reply. I've read through both your responses and found them quite helpful. All cleared up and I'm good to go. Thanks Philip and Nick!
Apologies upfront, as this isn't so much a bug as, more likely, a user error. I did my best to search for a similar issue but didn't find one.
I'm trying to create a rigid registration between two images, using my movingImage as the reference when applying the transform, but my resultant file is cropped.
fixedImage (raw):
movingImage (raw):
PrintHeader info for both images:
Both raw images in ITK-SNAP (movingImage in red):
My ANTs calls:

antsRegistration --dimensionality 3 --float 0 \
  --output subject_totemplate \
  --interpolation Linear \
  --winsorize-image-intensities [0.005,0.995] \
  --use-histogram-matching 0 \
  --initial-moving-transform [$fixedImage,$movingImage,0] \
  --transform Rigid[0.1] \
  --metric MI[$fixedImage,$movingImage,1,32,Regular,0.25] \
  --convergence [1000x500x250x100,1e-6,10] \
  --shrink-factors 8x4x2x1 \
  --smoothing-sigmas 3x2x1x0vox
antsApplyTransforms -d 3 \
  -i $movingImage -r $movingImage \
  -o subject_to_template_Warped.nii.gz \
  -n Linear \
  -t subject_to_template_0GenericAffine.mat
The result (movingImage in red on top of fixed image):
As you can see, they're rigidly aligned and the image appears to be translated correctly, but the image is now cropped off at the top of the head. I would have preferred more of a rotation, but I'm assuming that my results are driven by the skull and I would have better results with masking or skull-stripping first.
However, if I first apply pixel padding via ImageMath:

ImageMath 3 \
  ${movingBase}_25pad.nii.gz \
  PadImage \
  ${movingImage} \
  25
I then re-register and apply the transform as above, but now using the pixel-padded movingImage:
This is more of what I expected and looks to be rotated and translated properly. However, I'm now stuck with a padded image.
Running this output back through ImageMath PadImage to remove the 25 voxels just crops off my volume (viewed on top of the padded, warped image):
The only other relevant info I think I can provide is that the movingImage was converted with the latest build of dcm2niix. The fixedImage was created, I believe, using buildtemplateparallel.sh and HCP data. I've tried both ANTs 2.2.1 and 2.3.1 and gotten the same result with both.
Is there a way to perform this registration, keep the movingImage dimensions, but not have to apply pixel padding?