balbasty / nitorch

Neuroimaging in PyTorch

Affine registration module. #21

Closed brudfors closed 3 years ago

brudfors commented 3 years ago

A module for performing affine image registration -- it works on either tensors or nibabel file paths. This is the signature of the entry point:

def run_affine_reg(imgs, cost_fun='nmi', basis='SE', mean_space=False,
                   samp=(4, 2), optimiser='powell', fix=0, verbose=False,
                   fov=None, device='cpu'):
    """Affinely align images.

    This function aligns N images affinely, either pairwise or groupwise,
    by non-gradient-based optimisation. An affine transformation has a
    maximum of 12 parameters that control translation, rotation, scaling
    and shearing. It is a linear registration.

    The transformation model is:
        M_mov\A*M_fix,
    where M_mov is the affine matrix of the source/moving image, M_fix is
    the affine matrix of the fixed/target image, and A is the affine matrix
    that we optimise to align voxels in the source and target images. The
    affine matrix is represented by its Lie algebra (q), which is a vector
    with as many parameters as the affine transformation.

    When running the algorithm in a pairwise setting (i.e., the cost
    function takes two images as input), one image is set as fixed and all
    other images are registered to it. When running the algorithm in a
    groupwise setting (i.e., the cost function takes all images as input),
    two options are available:
        1. One of the input images is used as the fixed image and all
           others are aligned to it.
        2. A mean-space is defined, containing all of the input images,
           and all images are optimised to align in this mean-space.

    At the end, the affine transformation is returned, together with the
    definition of the fixed space and the rigid basis used.
    """
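As an illustration, here is a usage sketch based only on the signature and docstring above; the file names are hypothetical, and the exact structure of the return value (parameters, fixed-space definition, rigid basis) is an assumption for illustration, not confirmed from the implementation.

# Hedged usage sketch: parameter names come from the signature above;
# the return structure is assumed from the docstring's description.
paths = ['sub-01_T1w.nii', 'sub-01_T2w.nii']  # hypothetical nibabel paths

# Pairwise: register all images to the first one (fix=0), using NMI and
# a rigid-body ('SE') basis.
q, fixed_space, rigid_basis = run_affine_reg(paths, cost_fun='nmi',
                                             basis='SE', fix=0)

# Groupwise with a mean-space: all images are optimised to align in a
# common space that contains them all.
q, fixed_space, rigid_basis = run_affine_reg(paths, mean_space=True)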

What do you think of the structure, @balbasty? Maybe you can test the demo in the demo folder? Maybe some of your code could be used:

A future to-do is to use Bayesian optimisation instead of scipy's Powell method. Then everything could stay in torch, instead of casting to a NumPy array when calling _compute_cost. It might also cope better with local minima.
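For context, here is a minimal sketch of the NumPy round-trip that a scipy Powell optimiser forces on a torch-based cost function; the quadratic cost is a stand-in for illustration, not the actual _compute_cost.

import numpy as np
import torch
from scipy import optimize

target = torch.tensor([1.0, -2.0, 0.5])  # stand-in "true" parameters

def _toy_cost(q, target):
    # Stand-in for the real cost function: a smooth quadratic evaluated
    # in torch, as a placeholder for an image-similarity cost.
    q = torch.as_tensor(q, dtype=target.dtype)
    return ((q - target) ** 2).sum()

def cost_numpy(q):
    # scipy's Powell passes a NumPy array and expects a Python float back,
    # so the torch result has to be cast out of torch at every call.
    return float(_toy_cost(q, target))

res = optimize.minimize(cost_numpy, x0=np.zeros(3), method='Powell')
print(res.x)  # approximately [1, -2, 0.5]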

balbasty commented 3 years ago

Cool! I'll have a look later today, or tomorrow.

brudfors commented 3 years ago

I changed to using yours (nitorch.core.linalg.expm, not nitorch.core.linalg._expm). It works well, but there is no speed-up, as expected.
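For readers following along, here is a minimal sketch of the Lie-algebra-to-affine mapping that such an expm is used for. It uses torch.matrix_exp and hand-written SE(3) basis matrices rather than nitorch's own expm and basis helpers, so treat it as an illustration of the idea, not of nitorch's API.

import torch

def se3_basis(dtype=torch.float64):
    # Six 4x4 generators of SE(3): three translations, three rotations.
    B = torch.zeros(6, 4, 4, dtype=dtype)
    B[0, 0, 3] = B[1, 1, 3] = B[2, 2, 3] = 1   # translations along x, y, z
    B[3, 1, 2], B[3, 2, 1] = -1, 1             # rotation about x
    B[4, 0, 2], B[4, 2, 0] = 1, -1             # rotation about y
    B[5, 0, 1], B[5, 1, 0] = -1, 1             # rotation about z
    return B

def affine_from_lie(q, basis):
    # A = expm(sum_i q[i] * B_i): the matrix exponential maps the Lie
    # algebra vector q to a 4x4 homogeneous rigid-body matrix.
    X = (q[:, None, None] * basis).sum(dim=0)
    return torch.matrix_exp(X)

q = torch.tensor([2.0, 0.0, -1.0, 0.1, 0.0, 0.05], dtype=torch.float64)
A = affine_from_lie(q, se3_basis())
print(A)  # 4x4 rigid transform: rotation block plus translation column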

They were already there.

Using yours now, and have adapted the code slightly as well.