biomedia-mira / drop2

drop2 - intensity-based image registration

warping landmarks using the output deformation field #2

Closed Borda closed 4 years ago

Borda commented 4 years ago

I'll continue our discussion here, as I think it may also be useful for other users of your SW. For context, we are using DROP2 as one of the SOTA methods in BIRL for image registration, participating in the ANHIR challenge.

I will share with you two pairs of images:

  1. overlap of target and warped moving image
  2. visualization of the registration error: target landmarks, warped target landmarks (target landmarks plus the shift extracted from the deformation), and moving landmarks; the expectation is that the warped target landmarks should match the moving landmarks (the red line represents the registration error)

Some more details and assumptions:

For simple cases with a translation or affine transformation it looks fine, with just a tiny error: image_refence-warped registration_visual_landmarks

but for real images, 1] the registration itself is quite fine (see the alignment of the holes in the top-left and bottom-right corners), yet 2] shows a rather large error: image_refence-warped registration_visual_landmarks

Can the deformation be affected by different image sizes?

bglocker commented 4 years ago

I think the problem is that you cannot really 'transfer' the displacements that are defined on the target image to the source image. In order to get a correct visualisation of the displacement vectors of source landmarks on the source image, you will need to swap the roles of the images during registration (use your current source as target, and vice versa). Can you recreate the last image for this?
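As a toy numeric illustration of this point (made-up field and landmark values, not from the thread): a dense displacement field is indexed by target-grid coordinates, so only landmarks expressed in target space can be displaced by it.

```python
import numpy as np

# Toy illustration (made-up values, not from the thread): the field is indexed
# by *target*-grid coordinates, so only target-space landmarks can look it up.
h, w = 100, 100
deform_y = np.full((h, w), 2.0)   # toy field: uniform shift of (2, 3) pixels
deform_x = np.full((h, w), 3.0)

target_lms = np.array([[10.0, 20.0], [50.0, 60.0]])       # (y, x) landmarks
iy = target_lms[:, 0].astype(int)
ix = target_lms[:, 1].astype(int)
warped_lms = target_lms + np.stack([deform_y[iy, ix],
                                    deform_x[iy, ix]], axis=1)
print(warped_lms)   # each landmark shifted by (2, 3)

# Source landmarks index the source grid, a different space: to displace them
# you need the field from the reversed registration (source used as target).
```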

Borda commented 4 years ago

Ok, I will do it. Is the registration deterministic and symmetric, meaning that if source -> target reaches an optimum, then target -> source should too?

bglocker commented 4 years ago

In an ideal (continuous) world they would be symmetric, but they won't be, because of the discrete nature of the setting. The reference space is different for the two images, and inv(T_src_to_trg) != T_trg_to_src. Because of the way FFDs work (a uniform grid overlaid on the reference target frame), the deformations can in fact be significantly different between A->B and B->A.
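The asymmetry is easy to see even in 1D (a toy illustration, not DROP2 code): negating a non-uniform displacement field does not give the inverse map, because the true inverse displacement is the negated forward displacement evaluated at the warped position, not at the original one.

```python
import numpy as np

# Toy 1D check that inv(T) != "negate the field": for T(x) = x + u(x), the
# naive inverse y -> y - u(y) composes with T to x + u(x) - u(x + u(x)),
# which is the identity only when u is constant.
x = np.linspace(0.0, 1.0, 1001)
u = 0.05 * np.sin(2 * np.pi * x)      # smooth, non-uniform displacement
y = x + u                             # forward-warped coordinates

x_back = y - np.interp(y, x, u)       # apply the naive inverse at y
err = np.abs(x_back - x).max()
print(err)                            # clearly non-zero for this field
```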

Having said this, there are specific methods incorporating 'inverse consistency', going back to early work by Christensen & Johnson: Consistent image registration. IEEE TMI. 2001 Jul;20(7):568-82.

Borda commented 4 years ago

Just a quick question: would you consider renaming the options? Until a user sees the help, the prefix n... intuitively suggests a number rather than non-linear, so prefixes like out_... (e.g. out_noimage) and linear_... (e.g. linear_sampling) would be more intuitive.

bglocker commented 4 years ago

If you run linear registration without --ocompose you don't get a transformation field, that's correct, as the displacement field without composing the affine transformation would just be zeros. So if you want the full displacement field for the affine transformation you need to add --ocompose.

If non-linear registration is enabled, you will get the full field (including the affine transformation) when adding --ocompose. If you run non-linear registration without that option, you get only the displacement field of the non-linear part of the registration (this is often preferred where an analysis of the non-linear deformation is required without the affine part).
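The difference can be sketched numerically (a toy illustration with hypothetical numbers, not DROP2 internals): the affine part alone already induces a non-zero dense displacement, which is what --ocompose folds into the output field.

```python
import numpy as np

# Toy sketch (hypothetical numbers, not DROP2 internals): an affine map
# p -> A @ p + t written out as a dense per-pixel displacement field.
h, w = 4, 5
yy, xx = np.mgrid[0:h, 0:w].astype(float)
pts = np.stack([xx.ravel(), yy.ravel()])              # 2 x N pixel coords

theta = 0.1                                           # small rotation
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([1.0, -0.5])                             # translation

affine_disp = (A @ pts + t[:, None]) - pts            # affine-only displacement
# non-zero everywhere, which is why a linear-only run without --ocompose
# (where this part is kept out of the field) writes plain zeros instead
print(np.abs(affine_disp).max() > 0)                  # True
```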

I see your point about renaming the options. I will consider it.

Borda commented 4 years ago

I have been playing with the landmark warping and am still falling into the same trap: for some images it is fine (the visualisation correlates with the expectation), but some are bad... Typically the synthetic images are fine while the real ones are failing... I found some "inconsistency" in the image warping/deformation field (or there is some mistake in my thinking). I have created a simple script which warps the image according to the generated deformation field and compares it with the warped image.

import os

import numpy as np
import nibabel
import matplotlib.pylab as plt
import skimage.color as sk_color
from scipy.interpolate import griddata

REAL = True

def compose_images(img1, img2):
    dims = np.max([img1.shape, img2.shape], axis=0)
    img = np.zeros(tuple(dims[:2]) + (3, ))
    for i, img_ in enumerate([img1, img2]):
        img[:img_.shape[0], :img_.shape[1], i] = img_[:img_.shape[0], :img_.shape[1], ...]
    return img

if REAL:
    PATH_SOURCE = os.path.expanduser('~/Dropbox/Workspace/BIRL/data_images/lesions_/scale-5pc')
    PATH_IMG_SRC = os.path.join(PATH_SOURCE, 'Izd2-29-041-w35_HE.jpg')
    PATH_IMG_REF = os.path.join(PATH_SOURCE, 'Izd2-29-041-w35_proSPC.jpg')
    PATH_REGIST = os.path.expanduser('~/Desktop/BmDROP2_20191027-180923/4')
else:
    PATH_SOURCE = os.path.expanduser('~/Dropbox/Workspace/BIRL/data_images/images')
    PATH_IMG_SRC = os.path.join(PATH_SOURCE, 'artificial_reference.jpg')
    # PATH_IMG_REF = os.path.join(PATH_SOURCE, 'artificial_moving-affine.jpg')
    PATH_IMG_REF = os.path.join(PATH_SOURCE, 'artificial_moving-elastic.jpg')
    PATH_REGIST = os.path.expanduser('~/Desktop/BmDROP2_20191027-180923/2')

PATH_IMG_WARP = os.path.join(PATH_REGIST, 'output.jpeg')
PATH_DEF_X = os.path.join(PATH_REGIST, 'output_field_x.nii.gz')
PATH_DEF_Y = os.path.join(PATH_REGIST, 'output_field_y.nii.gz')

img_src = sk_color.rgb2grey(plt.imread(PATH_IMG_SRC))
img_ref = sk_color.rgb2grey(plt.imread(PATH_IMG_REF))
img_warp = plt.imread(PATH_IMG_WARP) / 255.

fig, axarr = plt.subplots(nrows=2, ncols=2, figsize=(12, 12))
axarr[0, 0].set_title("Overlap: target & source image")
axarr[0, 0].imshow(compose_images(img_ref, img_src))
axarr[0, 1].set_title("Overlap: target & warped (DROP) image")
axarr[0, 1].imshow(compose_images(img_ref, img_warp))

h, w = img_src.shape[:2]
# the NIfTI fields are stored transposed w.r.t. the JPEG image, hence the .T
deform_x = nibabel.load(PATH_DEF_X).get_data()[:w, :h, 0].T
deform_y = nibabel.load(PATH_DEF_Y).get_data()[:w, :h, 0].T
grid_y, grid_x = np.mgrid[0:deform_x.shape[0], 0:deform_x.shape[1]]

# scatter source pixels from shifted positions and resample on the pixel grid
img_warp2 = griddata(((grid_x - deform_x).ravel(), (grid_y - deform_y).ravel()),
                     img_src.ravel(), (grid_x, grid_y), method='linear')
axarr[1, 0].set_title("Overlap: target & warped (local) image")
axarr[1, 0].imshow(compose_images(img_ref, img_warp2))
axarr[1, 1].set_title("Overlap: DROP & local warped image")
axarr[1, 1].imshow(compose_images(img_warp, img_warp2))

fig.tight_layout()
plt.show()

The synthetic example seems to be well aligned: a1

But the real image is a very different story: a2

PS: the registration images used are here, and the configuration:

-l --ltype 0 --lsim 1 --llevels 1 1 1 --lsampling 0.5 --ocompose
bglocker commented 4 years ago

From a quick look at this, I think the x and y displacements are possibly being swapped somewhere in the process. I need to have a more detailed look to find out where. I suspect the transpose in nibabel.load(PATH_DEF_X).get_data()[:w, :h, 0].T has something to do with it.

The DROP warped image and your own warped image should really be identical, but I can even see differences for the synthetic images.

Borda commented 4 years ago

The transpose is there because the JPEG image has shape 800x600 but the deformation field is 600x800. We discussed this 90-degree rotation, and then you added JPEG/PNG as supported output formats... So you are saying there is something off either in DROP or in my image warping? (It took me some time wondering what I was doing wrong that the warped images do not match.)

bglocker commented 4 years ago

Got it. The real images have a pixel spacing of ~0.3, and the displacements are stored in the physical units of the images, so they need to be rescaled to map them to pixels.

There were a few other issues with the way griddata works: the first argument, points, is the original coordinates, and the argument xi is the displaced points.

If you replace the last bit of your test code with the following, it should work. Note: I'm using SimpleITK to read the .nii.gz, but I think you can do something similar with nibabel.

import SimpleITK as sitk
from skimage.transform import resize

img_src_resized = resize(img_src, img_ref.shape)
img_dx = sitk.ReadImage(PATH_DEF_X)
img_dy = sitk.ReadImage(PATH_DEF_Y)
# displacements are stored in physical units; divide by spacing to get pixels
spacing = img_dx.GetSpacing()
deform_x = sitk.GetArrayFromImage(img_dx)[0] / spacing[0]
deform_y = sitk.GetArrayFromImage(img_dy)[0] / spacing[1]
grid_y, grid_x = np.mgrid[0:deform_x.shape[0], 0:deform_x.shape[1]]

img_warp2 = griddata((grid_y.ravel(), grid_x.ravel()), img_src_resized.ravel(),
                     (grid_y + deform_y, grid_x + deform_x), method='linear')
axarr[1, 0].set_title("Overlap: target & warped (local) image")
axarr[1, 0].imshow(compose_images(img_ref, img_warp2))
axarr[1, 1].set_title("Overlap: DROP & local warped image")
axarr[1, 1].imshow(compose_images(img_warp, img_warp2))

fig.tight_layout()
plt.show()
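The points/xi convention in the corrected call can be checked on a toy ramp (illustration only, not part of the fix): with points on the pixel grid and xi shifted by half a pixel, the interpolated values shift by exactly half a grid step of the ramp.

```python
import numpy as np
from scipy.interpolate import griddata

# Toy check of the convention: `points` = known sample coordinates,
# `xi` = query coordinates where the interpolant is evaluated.
gy, gx = np.mgrid[0:4, 0:4].astype(float)
values = gx + 10 * gy                     # linear ramp, reproduced exactly
                                          # by piecewise-linear interpolation
out = griddata((gy.ravel(), gx.ravel()), values.ravel(),
               (gy + 0.5, gx), method='linear')
# querying half a pixel down the ramp adds 10 * 0.5 = 5 inside the hull;
# the last row falls outside the convex hull and becomes NaN
```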
Borda commented 4 years ago

cool, that seems to help... :) registration_visual_landmarks

asmagen commented 4 years ago

Hello @Borda & @bglocker, this issue and the transformations discussed seem very relevant to what I am trying to do: given the grayscale output from drop2, apply the transformation to the original RGB image to get the transformed version in RGB. What do I need to use from the discussion above to do that? These are the relevant stains (CD20 aligned to CD3, output in CD20_to_CD3_test):

Screen Shot 2020-03-27 at 12 14 45 PM
Borda commented 4 years ago

I would try to run the BIRL demo for DROP2.

asmagen commented 4 years ago

@bglocker I was wondering how long the call img_warp2 = griddata((grid_y.ravel(), grid_x.ravel()), img_src_resized.ravel(), (grid_y + deform_y, grid_x + deform_x), method='linear') used above takes to execute. I am running it on a relatively small image (4302x2846 pixels) and it hasn't finished after at least 10 minutes. My target image size is 5-10 times as big; would it scale up, or is there an alternative? Also, I have the CD20_to_CD3_test_field_z.nii.gz file although I used mode2d in the registration. Is it okay to ignore this file when reproducing the above on my own set of images?

Thanks
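For what it's worth (not from the thread): the corrected griddata call performs a backward warp, out[p] = src(p + d(p)), and when the displacement field lives on a regular pixel grid, scipy.ndimage.map_coordinates computes the same warp without building a triangulation, which typically scales far better to large images. A sketch with a toy identity field:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Sketch (toy identity field): map_coordinates samples the image at the given
# coordinates with spline interpolation, i.e. a backward warp on a regular
# grid, avoiding the Delaunay triangulation that makes griddata slow here.
h, w = 512, 512
img = np.random.rand(h, w)
gy, gx = np.mgrid[0:h, 0:w].astype(float)
dy = np.zeros((h, w))                     # toy zero field: identity warp
dx = np.zeros((h, w))

warped = map_coordinates(img, [gy + dy, gx + dx], order=1)
assert np.allclose(warped, img)           # identity field reproduces the image
```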