nipy / mindboggle

Automated anatomical brain label/shape analysis software (+ website)
http://mindboggle.info

Affine transformed data way off #39

Closed binarybottle closed 9 years ago

binarybottle commented 10 years ago

The function apply_affine_transform() is either incorrect or is being applied incorrectly, since the resulting mean positions of labels/features in standard space have values in the 200s.

https://github.com/binarybottle/mindboggle/blob/master/mindboggle/utils/io_table.py#L405

The transform itself is created by ANTs ComposeMultiTransform:

https://github.com/binarybottle/mindboggle/blob/master/mindboggle/mindboggle#L434

binarybottle commented 10 years ago

I ran the ANTs function antsApplyTransformsToPoints (antsApplyTransformsToPoints() in ants.py) to compare with apply_affine_transform(), which contains the lines:

    points = np.concatenate((points, np.ones((np.shape(points)[0],1))), axis=1)
    affine_points = np.transpose(np.dot(transform, np.transpose(points)))[:,0:3]
    affine_points = [x.tolist() for x in affine_points]

and got different results:

In [10]: affine_points[0:10]
Out[10]:
[[196.52807152289046, 171.62127580761137, -253.48690562628502],
 [196.8341953327768, 171.65836551889572, -252.9375246765232],
 [196.40244007474638, 171.65942229872212, -252.78893039708936],
 [197.02474327855697, 170.9971846276284, -257.76744117447527],
 [198.02759655936532, 170.91783552853667, -258.50153935702957],
 [197.82837269785975, 170.9813103934386, -258.23987332191905],
 [197.29994249678404, 170.88570715115617, -258.331089507743],
 [196.73872051470428, 170.83385738108313, -258.33071434729027],
 [197.97150792995012, 170.85408660309474, -258.689055263367],
 [197.6206889696739, 170.82318709622814, -258.59762539711556]]

In [11]: affine_points2[0:10]
Out[11]:
[[195.684964283714, 172.570319694319, -252.798155200884],
 [195.991088093549, 172.607409405571, -252.248774251109],
 [195.559332835534, 172.608466185394, -252.100179971689],
 [196.181636039375, 171.946228514344, -257.078690749026],
 [197.184489320124, 171.866879415269, -257.812788931637],
 [196.98526545864, 171.930354280144, -257.551122896511],
 [196.456835257589, 171.834751037845, -257.642339082317],
 [195.895613275484, 171.782901267835, -257.641963921855],
 [197.128400690751, 171.803130489786, -258.00030483798],
 [196.777581730481, 171.772230982931, -257.908874971725]]

This is a concern for two reasons: (1) the two sets of coordinates are slightly different from each other, and (2) both are way off from the expected values.
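For reference, the per-axis discrepancy between the two outputs can be read off the first point in each list (a quick check using the values above):

```python
# First point from apply_affine_transform() vs. the first point from
# antsApplyTransformsToPoints (values quoted above).
p_mb   = [196.52807152289046, 171.62127580761137, -253.48690562628502]
p_ants = [195.684964283714, 172.570319694319, -252.798155200884]
diff = [round(a - b, 3) for a, b in zip(p_mb, p_ants)]
# diff -> [0.843, -0.949, -0.689] (mm): sub-voxel, but systematic
```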

ohinds commented 10 years ago

That is disturbing. This worked fine when applied outside the context of ANTs, though. Maybe there are different assumptions about the spaces of these transforms between FreeSurfer and ANTs.

I'm trying to reproduce this behavior, but I'm having a hard time getting the MB stream to finish properly; I'll send you an email about that.

binarybottle commented 10 years ago

Sorry for the trouble, Oliver -- I forgot to take the inverse transform for one of the two composed transforms, the one generated by antsCorticalThickness.sh (see https://github.com/binarybottle/mindboggle/blob/master/mindboggle/mindboggle#L444).

The coordinates seem reasonable, but I would like visual confirmation of the transformed vtk in the same space as an MNI152 volume. Do you know of a good way to view both vtk and nifti files together? Paraview doesn't read in nifti files, and Freeview doesn't read in vtk files. The best I've come up with today is Daniel Haehn's Slicedrop.com -- based on this overlay of an atlas in MNI space, it looks like the registration is off (see attached image: screen shot 2014-03-05 at 1 00 54 pm).

ohinds commented 10 years ago

Is this resolved? I can keep looking at it but I'm not sure what the outcome of the conversation with Nick was.

binarybottle commented 10 years ago

The individual affine transforms from subject to template and from template to MNI, as well as the composite affine transform, are all correct. I checked this by applying the three transforms to different image volumes. When I try to apply the same composite transform to the points extracted from the subject's vtk file, however, there is a translational offset between the transformed points and a volume in MNI space.

binarybottle commented 10 years ago

For the record, I had an email exchange with Nick Tustison:

Arno: I'm trying to compose two transforms to get points from a vtk file into MNI space:

ComposeMultiTransform 3 xfm.txt -i antsTemplateToSubject0GenericAffine.mat OASIS-30_Atropos_template_to_MNI152_affine.txt
antsApplyTransformsToPoints -d 3 -i points.csv -o transformed_points.csv -t [xfm.txt,0]

I thought that this would enable me to take a subject file to the OASIS Atropos template, and from this template to MNI (I know that OASIS-30_Atropos_template_to_MNI152_affine.txt works), but the resulting transformed_points.csv are off.

Nick: It’s hard to say, particularly with points, but here’s a recent discussion which might be relevant: https://sourceforge.net/p/advants/discussion/840261/thread/3270e30a/ Also, remember that antsApplyTransforms also composes linear transforms.

I’m assuming you’re using antsCorticalThickness.sh to get the transforms, correct? If so, there’s a usage issue (bug [fixed March 5th]) described in the help menu such that the transforms could be inverses of each other. If the ‘-e’ option is the same as the ‘-t’ option, then the fixed image is the subject image and the moving image is the template image (as far as the resulting transform files are concerned). If ‘-e’ does not equal ‘-t’, then the fixed image is the template image and the moving image is the subject image. I’ve been meaning to fix this but I’ve had other things to attend to. You’ll have to figure out what corresponds to your situation and adjust accordingly.

Arno: I have -e and -t set to the same OASIS Atropos template. Does this mean that before you made this fix, my transforms were coming out as subject-to-template, and now they're template-to-subject, as the name suggests? And, just to be clear, by "fixed" and "moving", if you were labeling a target brain with a source atlas, would you call the atlas "fixed" and the target "moving", because the atlas does not get resliced?

Nick: Since '-e' and '-t’ were the same before the fix, the transforms should be such that the subject is the fixed image and the template is the moving image. In your scenario, yes, the moving image is the one that gets resliced.

Arno: So in my use case, the result will be the same?

Nick: I'd have to know where everything points, and that's unclear to me even with the names. However, assuming:

antsTemplateToSubject0GenericAffine.mat: subject --> template (xfrm "pulls" the template into the space of the subject)
OASIS-30_Atropos_template_to_MNI152_affine.txt: template --> MNI152

we want to compose xfrm1 and xfrm2 to go from subject --> template --> MNI152. It's been a long time since I've used ComposeMultiTransform, but I believe the syntax is

ComposeMultiTransform 3 xfrm.txt antsTemplateToSubject0GenericAffine.mat OASIS-30_Atropos_template_to_MNI152_affine.txt

(so no inverting) or, equivalently, with antsApplyTransforms:

antsApplyTransforms -d 3 -o Linear[xfrm.mat] -t antsTemplateToSubject0GenericAffine.mat -t OASIS-30_Atropos_template_to_MNI152_affine.txt

binarybottle commented 10 years ago

oliver -- i have shared a set of files that you can take a look at:

as i mentioned above, if you apply the transform to the volumes, all is well. if you try to apply it to points extracted from the vtk file, we're off. can you help me figure this out?

ohinds commented 10 years ago

Yup, I'll take a look today.

ohinds commented 10 years ago

Whatever this issue is, it is apparent in the results of both antsApplyTransformsToPoints and apply_affine_transform(). In the attached screenshot the original surface is in yellow. It matches the original volume (obviously). The green surface has been generated with apply_affine_transform(), and the red with antsApplyTransformsToPoints (the command-line tool, not going through ants.py). They are almost identical, but not quite, which is a problem.

However, the larger problem is that the 'OASIS' volume and the 'wmseg' volume transformed using antsApplyTransforms are in the same space as each other, but way off from the surfaces. I have no idea what's going on there, but at this point I don't think that the result of apply_affine_transform() is way off. I can continue investigating this if you like; I just wanted to make sure I'm still the right person to do it.

freeview

binarybottle commented 10 years ago

Given that the same transform successfully registers the subject volume to MNI152 but misregisters the subject surface, I think it must have to do with the surface registrations not having access to the header information -- perhaps the different offsets would account for this? Delta offset to MNI152 is [10, 2, 56]:

wmparc.nii.gz_through_combined_segmentations.nii.gz:
    qoffset_x = 80, qoffset_y = -128, qoffset_z = -128

OASIS-TRT-20_jointfusion_DKT31_CMA_labels_in_MNI152.nii.gz:
    qoffset_x = 90, qoffset_y = -126, qoffset_z = -72
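The delta can be checked directly from the qoffset fields (in practice one would read them from the headers with nibabel; here just the numbers quoted above):

```python
import numpy as np

# qoffsets reported above for the two volumes
subject_qoffset = np.array([80.0, -128.0, -128.0])  # wmparc..._segmentations
mni_qoffset     = np.array([90.0, -126.0, -72.0])   # OASIS-TRT-20..._in_MNI152
delta = mni_qoffset - subject_qoffset
# delta -> [10., 2., 56.], the delta offset quoted above
```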

forrestbao commented 10 years ago

I feel that the offsets could be the right place to troubleshoot. If the Z axis is the one perpendicular to the transverse/axial plane, this makes sense, because the green/red surfaces shift from the yellow surface the most along the Z axis. And if the X axis is the one perpendicular to the coronal plane, it also explains the smaller image offset in Oliver's visualization.

satra commented 10 years ago

i wrote this to arno separately, but is there a diagram that describes these transformations. there is no reason why there should be any offset unless some transform generator is misbehaving. so the key is to find the transform generator that is misbehaving and therefore it would be good to have a map of all the transforms taking place.

binarybottle commented 10 years ago
  1. The affine transform from template to MNI152 was generated by antsRegistration:

    atropos_to_MNI152_affine = 'OASIS-30_Atropos_template_to_MNI152_affine.txt'
  2. The affine transform from subject volume to OASIS template is generated by antsCorticalThickness.sh.
  3. The two are combined with ComposeMultiTransform in ANTs.
  4. If I apply (3) to the original subject image volume, it is registered to MNI152.
  5. If I apply (3) to the surface coordinates, it is misregistered:
     5a. Mindboggle converts the FS surface mesh to VTK format, then reads the coordinates with read_vtk().
     5b. The function read_itk_transform() reads in the transform (3).
     5c. The function apply_affine_transform() either runs antsApplyTransformsToPoints, or runs:

    points = np.concatenate((points, np.ones((np.shape(points)[0],1))), axis=1)
    affine_points = np.transpose(np.dot(transform, np.transpose(points)))[:,0:3]
    affine_points = [x.tolist() for x in affine_points]

satra commented 10 years ago

thanks arno - that's very clear.

checking: are the original subject volume and surface coordinates registered?

also fyi this thread might be relevant in the future: https://github.com/stnava/ANTs/issues/78

binarybottle commented 10 years ago

Yes, they are registered -- Oliver shows this in the above image.

(Thank you for pointing me to the ANTs issue -- this would affect Mindboggle!)

ohinds commented 10 years ago

So this is something in antsApplyTransforms that we don't understand. I applied the affine transform in question to the original volume using mri_convert --apply_transform, and now the transformed volume and the apply_affine_transform() surface line up fine (see attachment; the transformed surface is green). But neither is in MNI space. I will dig into the ANTs stuff to figure out what's going on.

fs_reg

ohinds commented 10 years ago

Using the inverse of the transform (as suggested by Brian) results in a closer (but still clearly incorrect) match. Continuing investigation.

ants_inv

ohinds commented 10 years ago

Ok, I got to the bottom of this. Forrest was right: the differences were (partially) due to the offset in the nifti files. This is a 'centered transform' in ITK, which means that the offset and the index-to-point transformation (stored in the nifti files) need to be applied before and after applying the transform itself.

The question now is how and where to store this info so that we can warp surface vertices the same way ITK (ANTs) does. One possibility would be to write out another file with the same name (different extension) as the ITK transform file where we store the required info. Thoughts?
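If I'm reading the centered-transform behavior right, the bookkeeping would look roughly like this (a sketch under that assumption, not ITK's actual code path; `offset` stands in for the nifti offset discussed above):

```python
import numpy as np

def apply_centered_transform(points, transform, offset):
    """Sketch of the 'centered transform' idea described above (an
    assumption, not ITK's implementation): remove the volume offset,
    apply the 4x4 affine, then restore the offset, so the linear part
    acts about the volume's center rather than the coordinate origin."""
    pts = np.asarray(points, dtype=float) - offset
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    out = (transform @ homog.T).T[:, :3]
    return out + offset
```

With an identity transform this is a no-op for any offset, which makes it easy to sanity-check.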

binarybottle commented 10 years ago

Thank you, Oliver! I could write out a file with such information, but if nibabel can extract the necessary information from a nifti file, then it seems to me that it would be better to supply that information as input to the apply_affine_transform() function that applies the transformation. How would you like to proceed?

ohinds commented 10 years ago

I think the simplest solution is the accompanying text file because then apply_affine_transform won't have to guess what nifti the surface came from, or depend on their existence after surface generation.

I'm happy to add that functionality if you like.

binarybottle commented 10 years ago

That would be great, Oliver. Thank you!

binarybottle commented 10 years ago

Hold on -- the surface points aren't getting transformed at all in freesurfer.py's surface_to_vtk(). Norig equals Torig, so xfm is the identity matrix. Have I chosen the wrong orig_file?:

    orig_file = os.path.join(os.path.dirname(surface_file),
                             "..", "mri", "orig.mgz")

    Norig = nb.load(orig_file).get_affine()
    Torig = np.array([[-1, 0, 0, 128],
                      [0, 0, 1, -128],
                      [0, -1, 0, 128],
                      [0, 0, 0, 1]], dtype=float)
    xfm = np.dot(Norig, np.linalg.inv(Torig))
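To make the identity observation concrete: for a conformed 256^3 FreeSurfer volume whose RAS center (c_r, c_a, c_s) is zero, Norig is exactly Torig, so xfm collapses to the identity (a quick check under that assumption):

```python
import numpy as np

# For a conformed 256^3 FreeSurfer volume with zero RAS center
# (c_r = c_a = c_s = 0), the vox2ras matrix Norig equals Torig,
# so the xfm computed above is exactly the identity.
Torig = np.array([[-1, 0, 0, 128],
                  [0, 0, 1, -128],
                  [0, -1, 0, 128],
                  [0, 0, 0, 1]], dtype=float)
Norig = Torig.copy()  # what orig.mgz reports when its RAS center is zero
xfm = np.dot(Norig, np.linalg.inv(Torig))
# xfm -> 4x4 identity: surface_to_vtk() moves no points
```
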

satra commented 10 years ago

as an example, can you run mri_info on both T1.mgz and orig.mgz? since these were OASIS files, it's possible that they distributed conformed space files.

$ mri_info GAB_ISN_pilot011/mri/orig.mgz
Volume information for GAB_ISN_pilot011/mri/orig.mgz
          type: MGH
    dimensions: 256 x 256 x 256
   voxel sizes: 1.0000, 1.0000, 1.0000
          type: UCHAR (0)
           fov: 256.000
           dof: 0
        xstart: -128.0, xend: 128.0
        ystart: -128.0, yend: 128.0
        zstart: -128.0, zend: 128.0
            TR: 2530.00 msec, TE: 0.00 msec, TI: 0.00 msec, flip angle: 0.00 degrees
       nframes: 1
       PhEncDir: UNKNOWN
ras xform present
    xform info: x_r =  -1.0000, y_r =   0.0000, z_r =  -0.0000, c_r =     3.7883
              : x_a =   0.0000, y_a =   0.0000, z_a =   1.0000, c_a =    25.6142
              : x_s =   0.0000, y_s =  -1.0000, z_s =  -0.0000, c_s =   -32.3497

talairach xfm : GAB_ISN_pilot011/mri/transforms/talairach.xfm
Orientation   : LIA
Primary Slice Direction: coronal

voxel to ras transform:
               -1.0000   0.0000  -0.0000   131.7883
                0.0000   0.0000   1.0000  -102.3858
                0.0000  -1.0000  -0.0000    95.6503
                0.0000   0.0000   0.0000     1.0000
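For what it's worth, the voxel-to-ras matrix above can be reconstructed from the xform info: the translation is c_ras - M @ center, where M holds the direction cosines (scaled by voxel size) and center is the voxel-space center of the 256^3 volume (my reading of the MGH convention; worth double-checking):

```python
import numpy as np

# Direction cosines from the mri_info printout above: columns are the
# x, y, z voxel-axis directions; rows are their R, A, S components.
M = np.array([[-1, 0, 0],
              [0, 0, 1],
              [0, -1, 0]], dtype=float)
c_ras  = np.array([3.7883, 25.6142, -32.3497])   # (c_r, c_a, c_s)
center = np.array([128.0, 128.0, 128.0])          # (256, 256, 256) / 2
translation = c_ras - M.dot(center)
# translation -> [131.7883, -102.3858, 95.6503], matching the last
# column of the voxel-to-ras transform printed above
vox2ras = np.eye(4)
vox2ras[:3, :3] = M
vox2ras[:3, 3] = translation
```
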

binarybottle commented 10 years ago

Below are T1.mgz, orig.mgz, and orig/001.mgz:

Lutchi:mri arno$ mri_info T1.mgz
Volume information for T1.mgz
          type: MGH
    dimensions: 256 x 256 x 256
   voxel sizes: 1.0000, 1.0000, 1.0000
          type: UCHAR (0)
           fov: 256.000
           dof: 0
        xstart: -128.0, xend: 128.0
        ystart: -128.0, yend: 128.0
        zstart: -128.0, zend: 128.0
            TR: 0.00 msec, TE: 0.00 msec, TI: 0.00 msec, flip angle: 0.00 degrees
       nframes: 1
       PhEncDir: UNKNOWN
ras xform present
    xform info: x_r =  -1.0000, y_r =   0.0000, z_r =   0.0000, c_r =     0.0000
              : x_a =   0.0000, y_a =   0.0000, z_a =   1.0000, c_a =     0.0000
              : x_s =   0.0000, y_s =  -1.0000, z_s =   0.0000, c_s =     0.0000

talairach xfm : /Applications/freesurfer/subjects/OASIS-TRT-20-1/mri/transforms/talairach.xfm
Orientation   : LIA
Primary Slice Direction: coronal

voxel to ras transform:
               -1.0000   0.0000   0.0000   128.0000
                0.0000   0.0000   1.0000  -128.0000
                0.0000  -1.0000   0.0000   128.0000
                0.0000   0.0000   0.0000     1.0000

voxel-to-ras determinant -1

ras to voxel transform:
               -1.0000   0.0000   0.0000   128.0000
               -0.0000  -0.0000  -1.0000   128.0000
               -0.0000   1.0000  -0.0000   128.0000
                0.0000   0.0000   0.0000     1.0000

Lutchi:mri arno$ mri_info orig.mgz
Volume information for orig.mgz
          type: MGH
    dimensions: 256 x 256 x 256
   voxel sizes: 1.0000, 1.0000, 1.0000
          type: UCHAR (0)
           fov: 256.000
           dof: 0
        xstart: -128.0, xend: 128.0
        ystart: -128.0, yend: 128.0
        zstart: -128.0, zend: 128.0
            TR: 0.00 msec, TE: 0.00 msec, TI: 0.00 msec, flip angle: 0.00 degrees
       nframes: 1
       PhEncDir: UNKNOWN
ras xform present
    xform info: x_r =  -1.0000, y_r =   0.0000, z_r =   0.0000, c_r =     0.0000
              : x_a =   0.0000, y_a =   0.0000, z_a =   1.0000, c_a =     0.0000
              : x_s =   0.0000, y_s =  -1.0000, z_s =   0.0000, c_s =     0.0000

talairach xfm : /Applications/freesurfer/subjects/OASIS-TRT-20-1/mri/transforms/talairach.xfm
Orientation   : LIA
Primary Slice Direction: coronal

voxel to ras transform:
               -1.0000   0.0000   0.0000   128.0000
                0.0000   0.0000   1.0000  -128.0000
                0.0000  -1.0000   0.0000   128.0000
                0.0000   0.0000   0.0000     1.0000

voxel-to-ras determinant -1

ras to voxel transform:
               -1.0000   0.0000   0.0000   128.0000
               -0.0000  -0.0000  -1.0000   128.0000
               -0.0000   1.0000  -0.0000   128.0000
                0.0000   0.0000   0.0000     1.0000

Lutchi:mri arno$ mri_info orig/001.mgz
Volume information for orig/001.mgz
          type: MGH
    dimensions: 256 x 256 x 160
   voxel sizes: 1.0000, 1.0000, 1.0000
          type: SHORT (4)
           fov: 256.000
           dof: 0
        xstart: -128.0, xend: 128.0
        ystart: -128.0, yend: 128.0
        zstart: -80.0, zend: 80.0
            TR: 0.00 msec, TE: 0.00 msec, TI: 0.00 msec, flip angle: 0.00 degrees
       nframes: 1
       PhEncDir: UNKNOWN
ras xform present
    xform info: x_r =   0.0000, y_r =   0.0000, z_r =  -1.0000, c_r =     0.0000
              : x_a =   1.0000, y_a =   0.0000, z_a =   0.0000, c_a =     0.0000
              : x_s =   0.0000, y_s =   1.0000, z_s =   0.0000, c_s =     0.0000

talairach xfm :
Orientation   : ASL
Primary Slice Direction: sagittal

voxel to ras transform:
                0.0000   0.0000  -1.0000    80.0000
                1.0000   0.0000   0.0000  -128.0000
                0.0000   1.0000   0.0000  -128.0000
                0.0000   0.0000   0.0000     1.0000

voxel-to-ras determinant -1

ras to voxel transform:
               -0.0000   1.0000  -0.0000   128.0000
               -0.0000  -0.0000   1.0000   128.0000
               -1.0000   0.0000   0.0000    80.0000
                0.0000   0.0000   0.0000     1.0000

binarybottle commented 10 years ago

Satra --

When I choose a non-OASIS subject (like Twins-2-1), mri_info shows identical vox-to-ras transforms for T1.mgz and orig.mgz, but these differ from those of OASIS-TRT-20-1 and from Torig. What do you recommend as the next step?

binarybottle commented 10 years ago

Satra and Brian had an exchange on github regarding the transformation of points vs. nifti volumes using ITK transforms. Satra wrote to me:

"""
brian and i had an extensive discussion on github. bottom line - this is ITK's representation that's different from Nifti, so this wouldn't be something that would be done in ANTs. remember this has nothing to do with the nifti file. this only affects the points transformation. well there are three things you need to implement:

  1. flip sign of the csv file coordinates for the first two dimensions
  2. apply the inverse transforms.
  3. flip sign of the first two dimensions of the output of antsapplytransformstopoints
"""

Now the documentation of apply_affine_transform() includes:

"""
For use with ANTs, x and y columns are multiplied by -1 before and after applying the inverse affine transform, because ITK uses a different coordinate system than the NIfTI coordinate system.
"""

Unfortunately, it still appears that there is a translational offset (25mm too low in z direction) between the mindboggle-transformed vtk file and a brain in MNI152 space.

binarybottle commented 9 years ago

Fixed!

Shape and vertex tables now have coordinates in MNI152 space, and surfaces can be properly transformed to MNI152 space with --xfm (tested with Twins-2-1, Oasis-TRT-20-1, and Colin27-1).

The problem was that the order of the subject-to-template and template-to-MNI affine transforms was switched. I have added the following to the documentation in mindboggle:

    #---------------------------------------------------------------------
    # For transforming surface points --
    # Make list of subject-to-MNI affine transforms
    # to use antsApplyTransformsToPoints:
    #
    # Note regarding antsApplyTransformsToPoints:
    # Points are transformed in the OPPOSITE direction of images,
    # so you pass the inverse of what is needed to warp the images.
    # Note 2: Don't forget to switch the order of the affine transforms!
    #---------------------------------------------------------------------

By contrast, I have also added a note regarding the differences in antsApplyTransforms:

    #---------------------------------------------------------------------
    # For transforming volume labels --
    # Make list of ANTs MNI152-to-subject nonlinear transforms
    # to use antsApplyTransforms:
    #
    # Note regarding antsApplyTransforms:
    # To warp the subject image to the template, one would call
    # antsApplyTransforms...-i ${subject} -r ${template}
    #                       -t ${prefix}SubjectToTemplate1Warp.nii.gz
    #                       -t ${prefix}SubjectToTemplate0GenericAffine.mat
    # To warp the template image to the subject, one would call
    # antsApplyTransforms...-i ${template} -r ${subject}
    #                       -t ${prefix}TemplateToSubject1GenericAffine.mat
    #                       -t ${prefix}TemplateToSubject0Warp.nii.gz
    #---------------------------------------------------------------------