
Distance of a point x in the moving image to a point y in the fixed image #410

Open 00tau opened 7 years ago

00tau commented 7 years ago

Hi,

I have a comprehension question about the 0GenericAffine.mat and the 1Warp.nii.gz file after running:

# define image 16 as the reference scan
fixed=../../data/0437-FM/0437-FM-016.nii
moving=../../data/0437-FM/0437-FM-003.nii

# doing a diffeomorphic warp
op=diffeo
antsRegistrationSyN.sh -d 3 -j 4 -f $fixed -m $moving -o $op

For an arbitrary point x in the moving image, and a different arbitrary point y in the fixed image, I would like to calculate the distance of x to y, after x has been mapped to the fixed image.

In particular, I wonder if I have understood the "Affine transformation file" on page 6, and the explanations of the file 1Warp.nii.gz on page 4 of the manual

https://github.com/stnava/ANTsDoc/raw/master/ants2.pdf

correctly.

Sorry for bothering you with some Python code. It really comes down to a single line where I am not sure whether I have understood the manual correctly.

Thank you in advance.

import numpy as np
import scipy.io as scio
import nibabel as ni

Here is the actual code of interest: For completeness, the two helper functions apply_affine and generic2affine are attached at the bottom.

# Images
imFixed  = ni.load('../../data/0437-FM/0437-FM-016.nii')
imMoving = ni.load('../../data/0437-FM/0437-FM-001.nii')

# Affine transformations from index to world coordinates
world_fixed = imFixed.affine
world_moving = imMoving.affine

# Deformation field
warp = ni.load('diffeo1Warp.nii.gz').get_data()
generic = scio.loadmat('diffeo0GenericAffine.mat')
affine = generic2affine(generic['AffineTransform_double_3_3'])

# I guess `generic['fixed']` is a fixed point for ... which rotation?
generic['fixed']
affine[:3,:3].dot(generic['fixed']) # no
np.linalg.inv(affine[:3,:3]).dot(generic['fixed']) # no

# Position of a point y with index `(i,j,k)` in the fixed image:
i,j,k = (16,16,16)
py = apply_affine(world_fixed, np.array((i,j,k)))
py

# Position `px` of a point x with index `(i,j,k)` in the moving image
# with respect to coordinates in the fixed image:
#  `px = A(x) + u(x) = affine(world_moving((i,j,k))) + warp((i,j,k))`

My question concerns the following line: does warp[i,j,k] contain the deformation u(x) if x = world_moving((i,j,k))?

px = apply_affine(affine.dot(world_moving), np.array((i,j,k))) + warp[i,j,k,0]
px

# If calculation of `px` is correct, then distance between `px` and `py`
# is

np.linalg.norm(py-px)

Where apply_affine and generic2affine are the functions:

def generic2affine (mat):
    """
    This will turn a vector
        array([[ 0],
               [ 1],
               [ 2],
               [ 3],
               [ 4],
               [ 5],
               [ 6],
               [ 7],
               [ 8],
               [ 9],
               [10],
               [11]])
    into
        array([[ 0,  1,  2,  9],
               [ 3,  4,  5, 10],
               [ 6,  7,  8, 11],
               [ 0,  0,  0,  1]])
    Following page 6 of the manual, this should be correct. Is it?
    """
    return(np.vstack((np.hstack((mat[:-3].reshape(3,-1), mat[-3:])), (0,0,0,1))))

def apply_affine (affine, vector):
    """
    Apply an affine transformation given in homologous coordinates to a
    vector or a matrix of column vectors.
    """
    vec = vector
    if (len(vector.shape) == 1):
        vec = vector.reshape(3,-1)
    if (len(vector.shape) > 2):
        vec = vector.reshape(3,-1)
    vec = np.vstack((vec, np.ones(vec.shape[1])))
    return(affine.dot(vec)[:3].reshape(vector.shape))
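
Regarding `generic['fixed']`: in ITK-style .mat files this field holds the centre of rotation c, and the affine acts as y = A(x - c) + t + c (the ITK MatrixOffsetTransformBase convention). So the 4x4 matrix built by generic2affine above only matches what ANTs applies when c is zero. Below is a minimal sketch, assuming that convention, of folding the centre into a single homogeneous matrix; note also that ITK/ANTs physical coordinates are LPS while nibabel affines are RAS, so a sign flip on the first two axes is needed before mixing the two.

import numpy as np
import scipy.io as scio

def itk_affine_to_homogeneous(matfile):
    """Build a 4x4 matrix from an ITK affine .mat file, assuming
    y = A (x - c) + t + c, with c taken from the 'fixed' field."""
    m = scio.loadmat(matfile)
    A = m['AffineTransform_double_3_3'][:9].reshape(3, 3)
    t = m['AffineTransform_double_3_3'][9:].reshape(3)
    c = m['fixed'].reshape(3)
    H = np.eye(4)
    H[:3, :3] = A
    H[:3, 3] = t + c - A.dot(c)   # effective offset
    return H

# e.g. affine = itk_affine_to_homogeneous('diffeo0GenericAffine.mat')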
ntustison commented 7 years ago
<_removing incorrect response to avoid confusion_>
stnava commented 7 years ago

is the order correct? seems like should be

1. 1Warp( y ) ---> y'
2. 0GenericAffine.mat( y' ) ---> y''

see fig 8 and 9 in ants2.pdf

ntustison commented 7 years ago

Sorry, you're the expert, Brian, so I defer to you. But are you referring to the order as specified in antsApplyTransforms or the order in which the operations are applied?

stnava commented 7 years ago

i'm thinking of both ... e.g. to take a point in an image from fixed to moving physical space it does:

x_world = point in fixed image
warp( x_world ) => x_new
affine( x_new ) => x_in_moving

anyway the point is to compose the operations

ntustison commented 7 years ago

Oh, dumb of me, you're right: the registration operation is in that direction.

00tau commented 7 years ago

The 1Warp and 1InverseWarp are images, and they map an index (i,j,k) to some vector. If I understand you correctly, these are

1Warp : index in fixed => R^3
        (i,j,k) -> y_ijk

1InverseWarp : index in moving => R^3
             (i,j,k) -> x_ijk

Q1: Is that correct?

I also have the maps

world_fixed : index in fixed => coordinate in fixed
world_moving : index in moving => coordinate in moving
0GenericAffine.mat: coordinate in fixed => coordinate in moving
0GenericAffine.mat^-1 : coordinate in moving => coordinate in fixed

If I understand you correctly, to get the coordinates with respect to the moving image of a point y in the fixed image, which is given to us by its index (i,j,k), we do

0GenericAffine.mat( world_fixed(i,j,k) + 1Warp(i,j,k) )

Q2: Is this correct?

On the other hand, to get the coordinates with respect to the fixed image of the point x in the moving image, which is given to us by its index (i,j,k), we do

0GenericAffine.mat^-1 ( world_moving(i,j,k) ) + 1InverseWarp(i,j,k)

or ... you see, this is where I am not sure. It could also be

0GenericAffine.mat^-1 ( world_moving(i,j,k) + 1InverseWarp(i,j,k) )

Q3: Which is correct?

The way in which I understood the text is

0GenericAffine.mat( world_moving(i,j,k) ) + 1Warp(i,j,k)

which is, well, none of the above.

stnava commented 7 years ago

registration is very much about getting details right and being very specific, so i find it hard to answer the questions above except to say that none of the shorthand looks correct to me; but perhaps i am misinterpreting your meaning. one point of confusion is that you are using index notation, so i'm not sure what ijk means wrt physical space, which is what we use in itk/ants to define registration operations.

in both fwd and inverse maps, you need to compose, not add.

fwd is

y = A( W( x ) )

inv is

x = InvW( InvA( y ) )

if you are doing this correctly, of course.

i would recommend testing against the chicken example

https://github.com/stnava/chicken
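
To make "compose, not add" concrete, here is a minimal sketch in plain Python (not an ANTs API; A, W, invA, invW are placeholder callables acting on physical coordinates):

def compose(*transforms):
    """Compose point transforms: compose(f, g)(x) == f(g(x))."""
    def composed(x):
        for t in reversed(transforms):
            x = t(x)
        return x
    return composed

# fwd = compose(A, W)        # y = A( W( x ) )
# inv = compose(invW, invA)  # x = InvW( InvA( y ) )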

00tau commented 7 years ago

Dear Brian, thank you for your reply, it is greatly appreciated.

Of course, we think in world coordinates when doing registrations. As you write, I want to compose the inverse warp with the inverse of the affine, namely InvW( InvA(y) ), thus going from the moving to the fixed image. It is

InvA (y) = 0GenericAffine.mat^-1 ( y )

As an end-user, I have access to the function InvW through the displacement field in the file 1Warp.nii.gz, which you define on page 6 of your manual: "The values of each pixel are the displacements in physical space. So, y = u(x) + x where u(x) is the displacement at x." So,

InvW ( InvA(y) ) = 1InverseWarp(InvA(y)) + InvA(y)

and this is why I have "added" the values of 1Warp and 1InverseWarp to the results of 0GenericAffine.mat and 0GenericAffine.mat^-1, respectively. Please correct me if I am wrong about this!

The displacement field 1InverseWarp.nii.gz is really only available to me on some fixed, finite grid: it is an image file of shape, say, (64,64,32). So I cannot evaluate 1InverseWarp at an arbitrary point InvA(y), only at points which lie on this grid. My question really boils down to whether

InvW ( InvA(y) ) = 1InverseWarp[ index of y in moving ] + InvA(y)

if I know the index of y in the moving image.

I am sorry if the phrasing of my original question wasn't as clear as intended. As you said, it is all about the little details.

stnava commented 7 years ago

in ITK, we interpolate transforms s.t. we can always use physical space. even displacement fields.

x = InvW( InvA( y ) )

is correct. if you write this out in detail, it would be something like:

x = InvW( InvAMatrix * y + Invt )

which, writing InvW out as the displacement field stored in 1InverseWarp.nii.gz, becomes

x = InvW( InvAMatrix * y + Invt ) + ( InvAMatrix * y + Invt )

so you need to interpolate InvW at the physical coordinate. invt is the translation.
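
As a concrete illustration, here is a minimal sketch (not the ANTs implementation) of interpolating the inverse displacement field at a physical point p = InvAMatrix * y + Invt. It assumes the warp NIfTI is a vector image of shape (nx, ny, nz, 1, 3), that all points are expressed in ITK/LPS physical coordinates, and that the stored vector components are LPS displacements; as suggested above, verify against the chicken example before trusting it.

import numpy as np
import nibabel as ni
from scipy.ndimage import map_coordinates

warp_img = ni.load('diffeo1InverseWarp.nii.gz')
field = np.squeeze(warp_img.get_fdata())            # (nx, ny, nz, 3)

# nibabel affines map voxel indices to RAS; flip the first two axes for LPS
ras2lps = np.diag([-1.0, -1.0, 1.0, 1.0])
vox_from_lps = np.linalg.inv(ras2lps.dot(warp_img.affine))

def displacement_at(p_lps):
    """Trilinearly interpolate the displacement vector at an LPS point."""
    idx = vox_from_lps.dot(np.append(p_lps, 1.0))[:3]
    return np.array([map_coordinates(field[..., d], idx.reshape(3, 1), order=1)[0]
                     for d in range(3)])

# p = InvAMatrix.dot(y) + Invt       # inverse-affine-mapped point (LPS)
# x = displacement_at(p) + p         # = InvW( InvA( y ) )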

00tau commented 7 years ago

Ok, I understand.

Is there a more generic way of getting all the "new" world coordinates of all pixels/voxels of the moving image?

dorianps commented 7 years ago

Not sure if this helps, but I have used antsApplyTransformsToPoints in ANTsR. It takes an Nx3 matrix, where N is the number of voxels, and gives the new coordinates. It works quite fast. I think it relies on the same tool available in ANTs.

Dorian
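
For reference, the same tool exists on the ANTs command line as antsApplyTransformsToPoints. Below is a sketch of mapping the physical coordinates of every voxel of the moving image in one call, assuming the diffeo* outputs from above. The CSV needs an x,y,z,t header, the points must be in LPS physical coordinates, and the -t order plus the ",1" inversion flag are the usual points-versus-images gotcha, so check them against the chicken example (and antsApplyTransformsToPoints --help).

import subprocess
import numpy as np
import nibabel as ni

img = ni.load('../../data/0437-FM/0437-FM-003.nii')
nx, ny, nz = img.shape[:3]

# all voxel indices -> RAS physical coordinates -> LPS for ANTs/ITK
ijk = np.indices((nx, ny, nz)).reshape(3, -1)
ras = img.affine.dot(np.vstack((ijk, np.ones(ijk.shape[1]))))[:3]
lps = ras * np.array([[-1.0], [-1.0], [1.0]])

pts = np.zeros((lps.shape[1], 4))                  # columns: x, y, z, t
pts[:, :3] = lps.T
np.savetxt('points.csv', pts, delimiter=',', header='x,y,z,t', comments='')

# moving -> fixed direction as discussed above (inverse affine, then inverse warp);
# verify the order and inversion flag against https://github.com/stnava/chicken
subprocess.run(['antsApplyTransformsToPoints', '-d', '3',
                '-i', 'points.csv', '-o', 'points_in_fixed.csv',
                '-t', 'diffeo1InverseWarp.nii.gz',
                '-t', '[diffeo0GenericAffine.mat,1]'],
               check=True)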
