ANTsX / ANTsPy

A fast medical imaging analysis library in Python with algorithms for registration, segmentation, and more.
https://antspyx.readthedocs.io
Apache License 2.0

Registration of 4D images, no transform is returned #47

Closed thomasbazeille closed 5 years ago

thomasbazeille commented 5 years ago

Hi, sorry for opening yet another issue, but I have a usage problem that I don't know how to debug. I'm trying to find a single 3D registration that best matches each volume of a source 4D image to the corresponding volume of a target 4D image.

My code is the following:

from ants import image_read, registration

fixed = image_read("path_to_target_4D_image.nii.gz")
moving = image_read("path_to_source_4D_image.nii.gz")
mask = image_read("path_to_common_3D_mask.nii.gz")
reg = registration(fixed=fixed, moving=moving,
                   type_of_transform='SyNOnly', mask=mask,
                   grad_step=0.5, flow_sigma=0, total_sigma=0,
                   reg_iterations=(100, 40, 20),
                   syn_metric='CC', syn_sampling=4, verbose=True)
print(reg)

This code runs, but it never actually computes a registration. The print returns:

{'warpedmovout': ... (same as moving),
 'warpedfixout': ... (same as fixed),
 'fwdtransforms': [],
 'invtransforms': []}

I had the same problem between 3D images when using the 'Mattes' metric; it disappeared when I switched to 'CC'. Here, though, no change of parameters seems to do the trick, and I can't find any logs to understand why the registration is not being computed.

Thanks a lot,

stnava commented 5 years ago

regarding what you already tried, are you trying to do a 4D to 4D registration here? or is it a set of 3D to 3D registrations? these are very different.

if it is 3D to 3D, i would recommend following an approach like this:

https://github.com/ANTsX/ANTsPy/blob/master/tutorials/motionCorrectionExample.ipynb
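For the 3D-to-3D case, a minimal sketch of that per-volume approach might look like the following (not the notebook's exact code; the file paths are placeholders, and it assumes both 4D images have the same number of volumes):

import ants

# read the two 4D images (placeholder paths)
fixed4d  = ants.image_read("target_4D_image.nii.gz")
moving4d = ants.image_read("source_4D_image.nii.gz")

# split each 4D image into a list of 3D volumes
fixed_vols  = ants.ndimage_to_list(fixed4d)
moving_vols = ants.ndimage_to_list(moving4d)

# one independent 3D-to-3D registration per volume pair
warped_vols = []
for f, m in zip(fixed_vols, moving_vols):
    reg = ants.registration(fixed=f, moving=m, type_of_transform='SyN')
    warped_vols.append(reg['warpedmovout'])

# merge the warped volumes back into a 4D image
warped4d = ants.list_to_ndimage(fixed4d, warped_vols)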

otherwise, here is an example of a 4D registration:

import ants
import numpy as np

# build two random 10x10x10x10 (4D) images
s1 = np.random.normal(100, 10, 10*10*10*10)
s2 = np.random.normal(100, 10, 10*10*10*10)
i1 = ants.make_image((10, 10, 10, 10), s1)
i2 = ants.make_image((10, 10, 10, 10), s2)

# a genuinely 4D SyN registration
areg = ants.registration(i1, i2, 'SyN', verbose=True)
areg['warpedmovout']

you should observe how the output looks here and determine whether what you've tried looks similar and (if not) how it differs.

the 'logs' are what you see when the verbose mode is true. you can capture the output to text if you want to read it carefully.

thomasbazeille commented 5 years ago

Hi Brian, thanks a lot for your kind response. I apologize if my previous message wasn't clear enough.

I have two 4D images X and Y, each composed of N 3D statistical maps (X[i], Y[i]) from one subject. I'm looking for a single, common 3D registration that best registers each statistical map to its counterpart, i.e. reg = registration(X[0], Y[0]) = .... = registration(X[i], Y[i]) = .... = registration(X[N], Y[N]).

I know it's uncommon to do that with statistical maps, but I think the same situation happens often with anatomical / multimodal images, and I thought that ants.registration did just that when applied to 4D images. Apparently I was wrong, as areg['fwdtransforms'] from your previous example returns a nii.gz of shape (10,10,10,10,4) rather than a single (10,10,10) transform.

The other use case you mentioned doesn't fit either, since X[0]..X[i]..X[n] must be registered onto different targets but with the same transform.

stnava commented 5 years ago

4D registration operates in 4D.

I am still not clear on what you are trying to do here. You can't have one registration that is optimal for all of these different image pairs. What you can do is split the difference.

So a simple solution is to average all of the X, average all of the Y, then compute one registration between X_avg and Y_avg.

Then apply that one unique and common 3D registration to the full time series.

The "motion correction" example shows how to split and merge images, which should help with this task.

I.e., split the 4D images, average them ( (image1 + image2 + ...) / N ), then do the registration, as in the sketch below.
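A minimal sketch of that average-then-register idea, assuming X and Y are 4D files with the same number of volumes (the paths and variable names are illustrative; ants.ndimage_to_list / ants.list_to_ndimage do the splitting and merging, and ants.apply_transforms pushes the single transform onto every volume):

import ants

X = ants.image_read("subject_X_4D.nii.gz")  # placeholder paths
Y = ants.image_read("subject_Y_4D.nii.gz")

def average_volumes(vols):
    # plain arithmetic average: (image1 + image2 + ...) / N
    avg = vols[0]
    for v in vols[1:]:
        avg = avg + v
    return avg / len(vols)

x_vols = ants.ndimage_to_list(X)
y_vols = ants.ndimage_to_list(Y)
x_avg = average_volumes(x_vols)
y_avg = average_volumes(y_vols)

# one 3D registration between the averages
reg = ants.registration(fixed=y_avg, moving=x_avg, type_of_transform='SyN')

# apply that single transform to every volume of X
warped_vols = [ants.apply_transforms(fixed=y_avg, moving=v,
                                     transformlist=reg['fwdtransforms'])
               for v in x_vols]
warped_X = ants.list_to_ndimage(Y, warped_vols)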

thomasbazeille commented 5 years ago

@bthirion It seems that our use case, optimizing one registration between two subjects from multiple contrasts, cannot be done with ANTs.

bthirion commented 5 years ago

Imagine a case where you have 2 imaging modalities, e.g. T1- and T2-weighted MRI. Isn't there any way to register these pairs of images with a single deformation between 2 subjects? Obviously, you don't want to average the input images.

stnava commented 5 years ago

one registration between two subjects with multiple contrasts is different than what you described before. is this example similar to what you want?

https://stnava.github.io/structuralFunctionalJointRegistration/

if you just want an inter-subject mapping, then one registration is sufficient but that would have to assume that the multiple contrasts are already aligned. the fundamental steps in such cases are:

stnava commented 5 years ago

this is available in a recent commit and should, along with my previous comments, handle whatever you want to do.

https://github.com/ANTsX/ANTsPy/wiki/Multi-metric-registration
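A rough sketch of what multi-metric registration looks like in ANTsPy, based on the linked wiki page. The multivariate_extras argument, a list of (metric_name, fixed, moving, weight, sampling_param) tuples, is the mechanism I understand it to describe; the file paths, weight, and sampling radius below are illustrative, and the two contrasts are assumed to be already aligned within each subject:

import ants

# two contrasts per subject (placeholder paths)
fixed_t1  = ants.image_read("subject1_T1.nii.gz")
fixed_t2  = ants.image_read("subject1_T2.nii.gz")
moving_t1 = ants.image_read("subject2_T1.nii.gz")
moving_t2 = ants.image_read("subject2_T2.nii.gz")

# the main metric is driven by the T1 pair; the T2 pair is added as an
# extra metric, so one deformation is optimized over both contrasts
reg = ants.registration(
    fixed=fixed_t1, moving=moving_t1,
    type_of_transform='SyN',
    multivariate_extras=[('CC', fixed_t2, moving_t2, 1.0, 4)],
    verbose=True)

# the same transform can then be applied to any moving contrast
warped_t2 = ants.apply_transforms(fixed=fixed_t1, moving=moving_t2,
                                  transformlist=reg['fwdtransforms'])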

bthirion commented 5 years ago

Great, thx !

stnava commented 5 years ago

a tutorial version as well:

https://github.com/ANTsX/ANTsPy/blob/master/tutorials/Multi-metric%20registration.ipynb

thomasbazeille commented 5 years ago

Thanks a lot for the new material.
