ANTsX / ANTs

Advanced Normalization Tools (ANTs)

Propagate the liver template to each subject's image so that every subject's liver is represented by the same number of voxels. #1133

Closed Marjola89 closed 3 years ago

Marjola89 commented 3 years ago

Hi, I am trying to register 3D liver images. I have used antsMultivariateTemplateConstruction2.sh to produce the template, using affine and BSplineSyN non-rigid registration. Now, having the template, I would like to propagate it onto each of the subject images so that they have corresponding voxels. At the moment I am using the code below, which uses rigid registration to remove position and orientation differences and then applies the transformation, but the number of voxels for each subject is still slightly different. I am not looking to use ResampleImage for this, as the size is the same for each image. Is there any way to obtain the same number of voxels for each subject while retaining its size and shape?

#!/usr/bin/env bash
ANTSPATH="${HOME}/Software/ants_install/bin"
TEMPLATE='./T_template0.nii.gz'
inputPath='./input'
outputPath='./training_v2'

for s in "${inputPath}"/*_ip.nii.gz; do
  # Derive the subject ID from the file name, e.g. ./input/S01_ip.nii.gz -> S01
  subject=$(basename "$s" | sed 's/_ip\..*//')
  echo " -- ${subject}"

  # Register the subject's in-phase image to the template
  "${ANTSPATH}/antsRegistrationSyN.sh" \
    -d 3 \
    -t a \
    -n 16 \
    -f "${TEMPLATE}" \
    -m "${s}" \
    -o "${outputPath}/${subject}_ip_"

  # Resample the fat image into template space with the in-phase transforms
  "${ANTSPATH}/antsApplyTransforms" \
    -d 3 \
    -i "${inputPath}/${subject}_fat.nii.gz" \
    -r "${TEMPLATE}" \
    -o "${outputPath}/${subject}_fat_Warped.nii.gz" \
    -n Linear \
    -t "${outputPath}/${subject}_ip_1Warp.nii.gz" \
    -t "${outputPath}/${subject}_ip_0GenericAffine.mat"

  # Same for the water image
  "${ANTSPATH}/antsApplyTransforms" \
    -d 3 \
    -i "${inputPath}/${subject}_water.nii.gz" \
    -r "${TEMPLATE}" \
    -o "${outputPath}/${subject}_water_Warped.nii.gz" \
    -n Linear \
    -t "${outputPath}/${subject}_ip_1Warp.nii.gz" \
    -t "${outputPath}/${subject}_ip_0GenericAffine.mat"
done
for s in "${outputPath}"/*_ip_1Warp.nii.gz; do
  subject=$(basename "$s" | sed 's/_ip_1Warp\..*//')
  echo " -- ${subject}"
  # Jacobian determinant of the warp field: no log (0), geometric computation (1)
  "${ANTSPATH}/CreateJacobianDeterminantImage" 3 \
    "${outputPath}/${subject}_ip_1Warp.nii.gz" \
    "${outputPath}/${subject}_ip_jacobian.nii.gz" \
    0 1
done

I am also looking to convert the images to meshes like the figure attached; however, the number of vertices needs to be the same. Could you please suggest any solution? Thank you :)

(figure: registered_data)

cookpa commented 3 years ago

I'm not sure I understand the question.

Now, having the template, I would like to propagate it onto each of the subject images so that they have corresponding voxels.

To make the voxels correspond across subjects, you would warp them into the template space, which is what your script is doing.

At the moment I am using the code below, which uses rigid registration to remove position and orientation differences and then applies the transformation, but the number of voxels for each subject is still slightly different.

I'm not sure what you mean by this. Do you mean the number of voxels inside a segmentation? Those are going to differ because of interpolation and residual registration errors. To make them consistent, you would need to synthesize the individual deformed images into a single consensus segmentation in the template space.

There are a variety of ways to do that, depending on how much you trust the individual segmentations in the subject space. The fastest way is just to average the probabilities from each subject, then threshold. You can also do label fusion with antsJointFusion, which takes longer but might be more accurate if the labels are accurate in the subject space.
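A minimal sketch of the averaging route, assuming each subject's binary liver mask has already been warped to the template space (the *_liver_Warped.nii.gz names here are hypothetical):

# Voxel-wise mean of the warped binary masks (0 = no intensity normalization),
# then keep voxels where at least half of the subjects agree.
"${ANTSPATH}/AverageImages" 3 liver_mask_mean.nii.gz 0 "${outputPath}"/*_liver_Warped.nii.gz
"${ANTSPATH}/ThresholdImage" 3 liver_mask_mean.nii.gz liver_consensus.nii.gz 0.5 1.0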

Marjola89 commented 3 years ago

Hi @cookpa, thank you for responding so quickly. Apologies for not being very clear. The only step I would like to do is the "label fusion" part, in which I will create images with the same number of voxels for each subject. So, thank you for suggesting antsJointFusion. My understanding is that it should be used as follows; however, I am not sure what goes in -l (labels):

"${ANTSPATH}/antsJointFusion" \
  -d 3 \
  -t "${s}${subject}_ip_1Warp.nii.gz" \
  -o labelFusionImage \
  -g "${TEMPLATE}" \
  -l ?

In my case, I only have the warped image and the template image, so which one is the label? Thank you!

cookpa commented 3 years ago

The idea is that for each subject, you have a grayscale image registered to the template, and a segmentation that goes with each subject's grayscale image. These segmentations are assumed to be highly reliable, usually by manual delineation or at least manual oversight of some automated process.

All of the grayscale images and segmentations are warped to the target space (ie, your template) first. Then you call antsJointFusion with pairs of images: -g subject1GrayWarped.nii.gz -l subject1LabelsWarped.nii.gz, -g subject2GrayWarped.nii.gz -l subject2LabelsWarped.nii.gz, and so on. The algorithm uses the similarity of the gray images to the template to determine the best final segmentation.
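For concreteness, a minimal sketch of such a call with two subjects, using the hypothetical file names above and your template as the target image:

"${ANTSPATH}/antsJointFusion" \
  -d 3 \
  -t T_template0.nii.gz \
  -g subject1GrayWarped.nii.gz -l subject1LabelsWarped.nii.gz \
  -g subject2GrayWarped.nii.gz -l subject2LabelsWarped.nii.gz \
  -o consensusLabels.nii.gz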

cookpa commented 3 years ago

Here is an example with data

https://github.com/ntustison/MalfLabelingExample

The workflow is slightly different here, because the script in the example does the registration and then calls antsJointFusion on the warped images. Since you've already done the registration step, you can call antsJointFusion directly. Please follow the usage for that program.
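If it helps, the program prints its full option list itself:

"${ANTSPATH}/antsJointFusion" --help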

Marjola89 commented 3 years ago

Thank you Philip, I am afraid I am getting a bit confused by the terminology. My images are either binary masks or intensity masks, like the warped one in the figure. From my understanding of the MalfLabelingExample, would I need as input for the -g atlas an image like intensity_ip_Warped.nii.gz in the figure below (which is my masked intensity image), and for the -l atlas segmentation/label an image like otsu_ip_Warped.nii.gz (which is my binary image), or the intensity_ip_jacobian.nii.gz? When I try the code, it does not give me any output or errors. Am I doing something wrong in the code? Thank you!

${ANTSPATH}/antsJointFusion \
-d 3 \
-o ${OutputPath}Output/example \
-g ${InputPath}intensity_ip_Warped.nii.gz -l ${InputPath}otsu_ip_Warped.nii.gz \
-g ${InputPath}intensity2_ip_Warped.nii.gz -l ${InputPath}otsu2_ip_Warped.nii.gz

(figure: liver_reg)

cookpa commented 3 years ago

The Jacobian isn't part of this operation. It looks like you have -g and -l correct, but you need a target image (-t) to label, ie, your template. The similarity between the template and each atlas (-g) will be used to determine how to label the target.

To see error information and other output, use the verbose option (-v).

Marjola89 commented 3 years ago

Thank you, I must have missed the template when pasting the code. I now run it with the template, but I get this error:

Running antsJointFusion for 3-dimensional images. At least 2 atlases are required.

Does that mean that I need two atlases in -g, or somewhere else?

cookpa commented 3 years ago

Yes, you need to specify -g and -l for each of the warped subject images you want to fuse.

Marjola89 commented 3 years ago

I am sorry, but I really cannot see what I am doing differently from the example you gave me. Could you possibly give me an example with antsJointFusion, please?

Marjola89 commented 3 years ago

Hi, so I managed to run the code. The problem was that I was not specifying the output as -o example.nii.gz. I can see from both the example and my results that the code takes multiple subjects as input but produces one output, like exampleMalfLabelsResult.nii.gz; however, I would like to produce an output for each subject with the same number of voxels, and I cannot see how to do that with antsJointFusion. Could you suggest any solution? Thank you

cookpa commented 3 years ago

I'm afraid I don't understand conceptually what you're trying to accomplish. The registration to the template transforms the subject images into a common space where a single consensus segmentation can be defined. If you warp that segmentation back to individual subject images, the number of voxels and their placement will vary unless the subject images are identical.

Marjola89 commented 3 years ago

I am trying to do what another software package (MIRTK or IRTK, see figure and here: https://www.doc.ic.ac.uk/~dr/software/usage.html) does, but in ANTs. Basically, estimating a rigid transformation between two images (in this software, VTK meshes) so that both images are represented by the same number of points. Can I do something similar in ANTs?

(figure: irtk)

cookpa commented 3 years ago

Basically, estimating a rigid transformation between two images (in this software, VTK meshes) so that both images are represented by the same number of points. Can I do something similar in ANTs?

Yes, though you have an affine in your original ANTs registration in the first post, not a rigid. But regardless, yes, you can warp points from a surface mesh to another space. This is different from warping an image, where voxel data are resampled in each subject image. When you warp points, the number of points is conserved.

You would define a surface mesh in your template space, perhaps using your joint fusion consensus segmentation that you just computed. Then you can transform the surface mesh points directly to each subject. It requires some work (unless someone has a better solution):

  1. Extract the points from the mesh in template space. These need to be written to a CSV file with columns (x,y,z,t), where t can be 0 (it's for time). These points need to be in ITK coordinates; if they are not, you will need to convert them. Often this involves flipping the sign of one of the axes (eg, y -> -y). This is tricky, but one way to check is to mesh some simple shape (eg, a square) and look at the ITK points on the surface using ITK-SNAP.

  2. Transform the points with antsApplyTransformsToPoints. Note the transform ordering will be different for points vs images:

https://github.com/ANTsX/ANTs/wiki/Forward-and-inverse-warps-for-warping-images,-pointsets-and-Jacobians#transforming-a-point-set

  3. Put the transformed points back into the mesh, which is now in subject space but has the same number of points. If you had to flip the coordinates in step 1, you need to flip them back here.
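For reference, the CSV consumed by antsApplyTransformsToPoints (steps 1 and 2) is plain text with an x,y,z,t header row, one point per line; a minimal example with made-up coordinates:

x,y,z,t
102.5,-41.3,88.0,0
101.8,-40.9,89.2,0
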
Marjola89 commented 3 years ago

Hi Philip, thank you so much for this explanation. I am a bit confused by this procedure, to be honest. I was hoping, when applying antsApplyTransformsToPoints to deform an image (the Warped image for each subject) from moving to fixed space, to be able to retain the size and shape information for further analysis. However, I applied the code below, and the output I get is a csv; when I plot the two subjects, they have different sizes and the same number of points, but the shape is the same (identical to the template's):

for s in ${InputPath}[0-1A-Z]*/; do
  "${ANTSPATH}/antsApplyTransformsToPoints" \
    -d 3 \
    -i "${TEMPLATE}liver_template.csv" \
    -o "${s}Output/liver_InMovingSpace.csv" \
    -t "${s}intensity_ip_Warped.nii.gz" \
    -t "${s}intensity_ip_0GenericAffine.mat"
done

The steps I followed were to convert the template to ITK coordinates (x,y,z,t=0, and I did change the sign on the y axis), then apply the antsApplyTransformsToPoints code, and then plot the csv output for each subject (see figure below). Is that what you would expect to see after applying this code? (figure: liver_applytransformtoPoints)

I am a bit unsure if I did this correctly, but again, what I was hoping to get is a plot like the one below, where the size and shape are different for each subject (however, these have a different number of voxels, as they are the ones after registration, before using antsApplyTransformsToPoints).

(figure: Warped_subjects)

cookpa commented 3 years ago

for s in ${InputPath}[0-1A-Z]*/; do
  "${ANTSPATH}/antsApplyTransformsToPoints" \
    -d 3 \
    -i "${TEMPLATE}liver_template.csv" \
    -o "${s}Output/liver_InMovingSpace.csv" \
    -t "${s}intensity_ip_Warped.nii.gz" \
    -t "${s}intensity_ip_0GenericAffine.mat"
done

This is wrong, you are using the deformed image (_Warped) as a transform. The transforms are

-t "${outputPath}/${subject}_ip_1Warp.nii.gz" -t "${outputPath}/${subject}_ip_0GenericAffine.mat"

EDIT: transform order is correct for moving points from fixed to moving space, sorry.

As to the size: size is only preserved under a rigid transform. In your original post you mentioned using BSplineSyN, which is deformable, but your antsRegistrationSyN.sh command used an affine transform, which also allows changes in scale. You'd need to run with rigid only, -t r, to keep the size of the mesh constant.
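Using the variables from the script in the first post, a rigid-only run would look something like:

"${ANTSPATH}/antsRegistrationSyN.sh" \
  -d 3 \
  -t r \
  -f "${TEMPLATE}" \
  -m "${s}" \
  -o "${outputPath}/${subject}_ip_"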

cookpa commented 3 years ago

@Marjola89 apologies again, I think I inadvertently edited your message instead of typing my response. Here's what I said

Thank you for your quick response! I thought I would use this code from the wiki:

${ANTSPATH}antsApplyTransformsToPoints \
  -d 3 \
  -i landmarksInFixedSpace.csv \
  -o landmarksInMovingSpace.csv \
  -t movingToFixed_1Warp.nii.gz \
  -t movingToFixed_0GenericAffine.mat 

as -i is my fixed-space csv, which is liver_template.csv here.

Yes, you are correct. Sorry about that.

I reran my registration with rigid only (-t r) and changed the antsApplyTransformsToPoints call to the code you provided. Now the size of the meshes is constant (although they are not perfectly aligned, as they are not affine-registered), but the shape is still the same as the template's shape. How can I retain the shape of each subject? The two subjects: (figure: applyTransformtoPoints)

Template: (figure: template)

As for BSplineSyN, I used that in antsMultivariateTemplateConstruction2.sh to construct the template. I cannot see how that is affecting the shape of each subject when I apply antsApplyTransformsToPoints.

You're right, the template call shouldn't affect what is produced by your antsRegistrationSyN.sh call. But I think I'm confused because if you don't do deformable registration for each subject, you wouldn't have a warp field.

For an affine-only or rigid-only registration, your call would be

${ANTSPATH}antsApplyTransformsToPoints \
  -d 3 \
  -i landmarksInFixedSpace.csv \
  -o landmarksInMovingSpace.csv \
  -t movingToFixed_0GenericAffine.mat 

If there is a 1Warp.nii.gz, that means there was a deformable registration run for the subject.

I don't think there's a way to conform the mesh exactly to each subject's contours while doing rigid registration. A deformable registration would allow the mesh to be closer to the subject's contours.

Marjola89 commented 3 years ago

Hi @cookpa, apologies for my late response, and thank you for helping me understand this procedure. I have been looking at the suggestions you provided; however, I still cannot find a solution to my problem. When I try antsApplyTransformsToPoints with the code you provided, I get a .csv file that is the same as the template's .csv file. I want to warp/propagate the average template surface mesh to each subject's space by employing the inverse of the registration field from the subject segmentation to the template segmentation, but even when I try antsApplyTransforms (following the procedure in https://github.com/ANTsX/ANTs/wiki/Forward-and-inverse-warps-for-warping-images,-pointsets-and-Jacobians#transforming-a-point-set) I get a fixedTomovingDeformed image that is the same as liver_intensity_template, whereas I want it to be similar to liver_intensity_moving instead (see figure; note that intensity_Warped here is the image produced by antsRegistrationSyN.sh). (figure: applyTransforms)

I have been looking at some of your papers where you use ANTs for SyN (https://pubmed.ncbi.nlm.nih.gov/17659998/) and similarity metrics (https://pubmed.ncbi.nlm.nih.gov/20851191/), and I can see that you mention "5. Warp the template labels, with nearest neighbor interpolation, to each individual and evaluate overlap measures with respect to ground truth for both affine and the combined affine and diffeomorphic maps." I am not sure how you do that exactly, and what outcome you get. Are you getting an image that is similar to the template or similar to the subject? I am really not sure what exactly I am doing wrong here. Could you please suggest any solution? Thank you

cookpa commented 3 years ago

Can you share an example? One template, one subject, and a runnable script that does the registration / transforms, like in your first post?

Marjola89 commented 3 years ago

Sure, so my subject is liver_intensity_subject.nii.gz and my template is liver_intensity_template.nii.gz, with liver_intensity_template.txt in the xyzt system (I uploaded it as .txt as it wouldn't let me upload a .csv):

liver_intensity_subject.nii.gz
liver_intensity_template.nii.gz
liver_intensity_template.txt

and this is the code I use:

for f in ${InputPath}/[0-7A-Z]*/
do
    fileIn=$(echo "$f" | awk -F/ '{ print $7 }')
    echo " -- ${fileIn} ---"

    "${ANTSPATH}"/antsRegistrationSyN.sh \
    -d 3 \
    -f "${TEMPLATE}liver_intensity_template.nii.gz" \
    -m "${f}liver_intensity_subject.nii.gz" \
    -o "${f}intensity_" \
    -t a

    # Transform from template to target
    "${ANTSPATH}/antsApplyTransforms" \
    -d 3 \
    -i "${TEMPLATE}liver_intensity_template.nii.gz" \
    -r "${f}liver_intensity_subject.nii.gz" \
    -o "${f}liver_fixedTomovingDeformed.nii.gz" \
    -t ["${f}intensity_0GenericAffine.mat",1] \
    -t "${f}intensity_InverseWarped.nii.gz" \
    -n NearestNeighbor

    # Transform the template points
    "${ANTSPATH}/antsApplyTransformsToPoints" \
    -d 3 \
    -i "${TEMPLATE}liver_intensity_template.csv" \
    -o "${f}liver_fixedTomovingDeformed.csv" \
    -t ["${f}intensity_0GenericAffine.mat",1] \
    -t "${f}intensity_InverseWarped.nii.gz"
done

You suggested that -t a would only produce a GenericAffine.mat and that I wouldn't want to use the Warped.nii.gz and InverseWarped.nii.gz for antsApplyTransforms, but I produce them too, and that is why I used them above. I have tried changing it to -t s, but I get the same issue, and besides, it is not what I want for my project.

cookpa commented 3 years ago

Thanks. I realize here I've got the transform order wrong, so I'm going to start over.

cookpa commented 3 years ago

I ran

${ANTSPATH}antsRegistrationSyN.sh \
    -d 3 \
    -f liver_intensity_template.nii.gz \
    -m liver_intensity_subject.nii.gz \
    -o intensity_ \
    -t a

This produces one transform, intensity_0GenericAffine.mat. It also produces deformed moving and fixed images: intensity_Warped.nii.gz (the moving image in fixed space) and intensity_InverseWarped.nii.gz (the fixed image in moving space). The "Warped.nii.gz" images are not warp fields; I think this is why subsequent commands aren't working for you. Warp fields are vector-valued, typically end with 1Warp.nii.gz or 1InverseWarp.nii.gz, and are only produced for nonlinear registrations.
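For reference, roughly what antsRegistrationSyN.sh -o intensity_ writes for each transform type (default naming):

-t a (rigid + affine):
  intensity_0GenericAffine.mat    (a transform)
  intensity_Warped.nii.gz         (moving image resampled into fixed space)
  intensity_InverseWarped.nii.gz  (fixed image resampled into moving space)

-t s (rigid + affine + deformable SyN):
  all of the above, plus
  intensity_1Warp.nii.gz          (forward warp field; a transform)
  intensity_1InverseWarp.nii.gz   (inverse warp field; a transform)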

(figure: affine registration result)

The alignment looks OK for an affine. It can only represent global changes, so the correspondence won't be exact. To apply the transform to the points, I converted the text file to csv and then ran:

${ANTSPATH}antsApplyTransformsToPoints \
    -d 3 \
    -i liver_intensity_template.csv \
    -o liver_fixedToMovingDeformedPoints.csv \
    -t  intensity_0GenericAffine.mat

The same transform that warps the moving image to fixed space moves points from the fixed to moving space. If I wanted to recreate intensity_Warped.nii.gz, I would do

${ANTSPATH}antsApplyTransforms \
    -d 3 \
    -r liver_intensity_template.nii.gz \
    -i liver_intensity_subject.nii.gz \
    -o intensity_Warped_Again.nii.gz \
    -t  intensity_0GenericAffine.mat

To get a more precise alignment, you can change -t a in the registration call to something else, like -t s. Then your points transform would look like

${ANTSPATH}antsApplyTransformsToPoints \
    -d 3 \
    -i liver_intensity_template.csv \
    -o liver_fixedToMovingDeformedPoints.csv \
    -t intensity_1Warp.nii.gz \
    -t intensity_0GenericAffine.mat

Marjola89 commented 3 years ago

Hi @cookpa, thank you so much for these suggestions! I have tried both registration transform types, -t a and -t s, and used the code above to apply the transform to points for each registration type. My output liver_fixedToMovingDeformedPoints.csv for both procedures gives me the figure below, where red is liver_fixedToMovingDeformedPoints.csv and blue is liver_intensity_template.csv. As you can see, they are not aligned; from my understanding, that may be due to intensity_0GenericAffine.mat, because it transforms the points to the subject's space. (figure: with_0GenericAffine_mat)

Is there any way to align these meshes? I found that by excluding intensity_0GenericAffine.mat from the code (after applying the registration transform type -t s) and doing the following, I get a good alignment (see figure below):

${ANTSPATH}antsApplyTransformsToPoints \
    -d 3 \
    -i liver_intensity_template.csv \
    -o liver_fixedToMovingDeformedPoints.csv \
    -t intensity_1Warp.nii.gz 

(figure: without_0GenericAffine_mat)

However, I am not sure if that is the correct way to do it. Also, when I apply the registration transform type -t s, the process takes a lot more time to produce results, even though I have tried to run it in parallel. Is there any way to run this faster? Thank you so much for your help!

cookpa commented 3 years ago

That's right: the points are being transformed into the subject's physical space in the first picture. By omitting the affine part, you are applying only the nonlinear part of the transform; rotation, translation, and global scaling/shearing are not represented.

The SyN registration does take a lot longer than affine. You can speed it up with multiple cores. You can also see if antsRegistrationSyNQuick.sh is good enough for your requirements.

There are other tricks to speed things up, possibly at the cost of performance, but they require editing the antsRegistration command directly. The command printed by the antsRegistrationSyN*.sh scripts is a starting point that can be altered to speed things up.
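Two concrete knobs, assuming a standard ANTs install (the quick script trades some accuracy for speed):

# More threads (the scripts' -n flag sets the same thing):
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=16

# Or use the quick variant of the registration:
"${ANTSPATH}/antsRegistrationSyNQuick.sh" \
  -d 3 \
  -t s \
  -f liver_intensity_template.nii.gz \
  -m liver_intensity_subject.nii.gz \
  -o intensity_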

Marjola89 commented 3 years ago

That sounds great! Thank you for all your help!

Marjola89 commented 3 years ago

Hi @cookpa, I am sorry to bother you with this again; however, I have noticed that when I apply antsApplyTransformsToPoints to transform the template to the subject's space, some regions of the transformed liver (in red) show no difference from the template (in blue) (see figures below). Subject 1: (figure: sub1) Subject 2: (figure: sub2)

You can see that when I compute the signed distances between the template and the transformed mesh, there is zero distance in that region.

(figure: signed_distances_error)

I am using a rigid + BSplineSyN registration now; however, I have tried all the possible transform types:

"${ANTSPATH}"/antsRegistrationSyN.sh \
    -d 3 \
    -f liver_intensity_template.nii.gz \
    -m liver_intensity_subject.nii.gz \
    -o intensity_ \
    -t br

and

${ANTSPATH}antsApplyTransformsToPoints \
    -d 3 \
    -i liver_intensity_template.csv \
    -o liver_fixedToMovingDeformedPoints.csv \
    -t intensity_1Warp.nii.gz 

Is there any way I can fix this problem? I would like to warp the template to the same anatomical positions in each subject's mesh to obtain co-registered vertices in a standard coordinate space and, consequently, a liver mesh encoding their variation. I would really appreciate your help! Thank you!

cookpa commented 3 years ago

These results have to be evaluated alongside the image registration. It's possible that some parts of the surface will have small deformation because the rigid alignment might align some of the edges of the two objects together.

You can view the warp field as an overlay in ITK-SNAP to assess how much deformation has happened in different parts of the image.

If the mesh is not deformed correctly, it is likely due to a coordinate mismatch between the VTK mesh and ITK space. This can be a wrong orientation (I think you dealt with this already) or a wrong origin.
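If the mesh coordinates turn out to be RAS (common for VTK/Slicer-derived meshes) while ITK expects LPS, one way to convert the points CSV is to flip the signs of x and y; a sketch, assuming the x,y,z,t layout discussed earlier (file names are hypothetical):

awk -F, 'NR==1 {print; next} {printf "%s,%s,%s,%s\n", -$1, -$2, $3, $4}' points_ras.csv > points_lps.csv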

Marjola89 commented 3 years ago

Ok, I see now. Thank you for your recommendations!

borysd commented 3 years ago

Hi, I was following this discussion and I think we have a similar problem. Unfortunately, the solution given here did not help in my situation. I have two images and an .stl file that was produced from the mask of the structure in one of the images (the moving one). I also wanted to deform the coordinates in this .stl file (using antsApplyTransformsToPoints) with the deformation field produced by registration with ANTs (using SyN).

Working on the data (the liver) that you shared, it works perfectly. But when I use the same approach and scripts on our data, I can modify the coordinates using the affine matrix, but adding the Warp does nothing. The Warp deformation field contains reasonable (and non-zero) values. Using the Warp (or InverseWarp) alone also does not modify the coordinates at all - any help? I'm using the same scripts suggested here in the discussion (antsRegistrationSyN.sh with the -t s option).

Let me share both images and the .csv (generated from the .stl file):

fixed: https://www.ziemowit.hpc.polsl.pl/owncloud/index.php/s/MeT4nsoSCrxCRDZ
moving: https://www.ziemowit.hpc.polsl.pl/owncloud/index.php/s/CGxcFp6D53Wq9ow
csv: https://www.ziemowit.hpc.polsl.pl/owncloud/index.php/s/jTPfr4mWZaLDaxH

Thank you, Damian

Marjola89 commented 3 years ago

Hi @borysd, I can see that the fixed and moving images are full scans, not organ segmentations. The .csv file looks OK apart from the last two columns, named "label" and "comment"; you don't need these columns for antsApplyTransformsToPoints. The organ in the .csv seems to be derived from a segmentation, but your input to antsRegistrationSyN.sh is the full scan images, so you would ideally need a subject segmentation NIfTI image as the moving image and a template produced from image segmentations as the fixed image. Hope that helps!