voxelmorph / voxelmorph

Unsupervised Learning for Image Registration
Apache License 2.0

How to train a multi-modal registration network? #38

Closed: ghost closed this issue 5 years ago

ghost commented 5 years ago

Hi, I have some vascular images from fundus images and CT scans. I want to register these two types of images. How could I train a multi-modal network using this repo? Thanks

adalca commented 5 years ago

Hi @leilamou ,

Are you trying to register a fundus image to a CT scan, or trying to register pairs of (fundus+CT) to each other?

Registering fundus to CT requires a multi-modality cost function. If you have some sort of manual annotation common to both modalities (segmentations or markers), that would be easiest. Otherwise, we could provide you with some initial implementations of mutual information losses we have, but we don't know how well they would work for you.

Registering one (fundus+CT) pair to another (fundus+CT) pair is another story. You need to stack each pair and use that as input. You should be able to use our network, except for one change: when unet_core is called, src_feats and tgt_feats should each be 2 (to reflect that you have two modalities per input). So, for example, if you want to use the CVPR network, you could modify cvpr2018_net so that when it calls unet_core, it passes those arguments, as in the sketch below. All else should work.
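For concreteness, here's a minimal sketch of that change (untested, the function name is hypothetical, and the exact signatures may differ in your copy of networks.py):

def cvpr2018_net_2ch(vol_size, enc_nf, dec_nf, full_size=True):
    # hypothetical variant of cvpr2018_net: build the unet with two input
    # channels per image, to hold the two stacked modalities
    unet_model = unet_core(vol_size, enc_nf, dec_nf, full_size=full_size,
                           src_feats=2, tgt_feats=2)
    # ... the rest stays as in cvpr2018_net (flow conv + spatial transformer)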

Hope this helps,

JianshuiChen commented 5 years ago

We have encountered some issues when using your project in our research. There are about 40 patients in our dataset, and each patient has one OCT scan and one fundus image. Our task is to register OCT scans to fundus images, and we hope to train the model on all 40 patients' data. Our current approach is:

The fundus image is treated as the fixed image while the OCT scan is treated as the moving image. Then, we attempt to change datagenerators.py so as to load the moving image (OCT scan) and the fixed image (fundus image) simultaneously. Finally, we train the model.

Due to our limited knowledge of TensorFlow, we hope you can point out how to load a fixed image alongside a moving image in your code. We are also wondering whether we need a new loss function to optimize our model. Can you give us some guidance on how to better solve our task? We would be grateful for your help!

[example image] The left is a color fundus image and the right is an OCT scan.

adalca commented 5 years ago

From the looks of it, this is its own research project, and it won't be trivial.

I would suggest starting with voxelmorph and writing a new generator, similar to how we load both images and segmentations in example_gen. For example, you can put your data in 40 patient folders, each containing oct.nii and fundus.nii. Then, in the data generator, randomly pick a subject, load its OCT and fundus images, and yield them.

Here's a rough idea -- this isn't final or tested, but it should get you going:

import os
import numpy as np
# load_volfile lives in voxelmorph's datagenerators.py
from datagenerators import load_volfile

def cvpr2018_octfundus(vol_folders, batch_size=1):
    """
    yields ([oct, fundus], [fundus, zeros]) batches for training
    """

    while True:
        idxes = np.random.randint(len(vol_folders), size=batch_size)

        # get oct volumes
        X_data = []
        for idx in idxes:
            vol_name = os.path.join(vol_folders[idx], 'oct.nii')
            X_data.append(load_volfile(vol_name)[np.newaxis, ..., np.newaxis])
        ret = [np.concatenate(X_data, 0)]

        # get fundus volumes
        X_data = []
        for idx in idxes:
            vol_name = os.path.join(vol_folders[idx], 'fundus.nii')
            X_data.append(load_volfile(vol_name)[np.newaxis, ..., np.newaxis])
        ret.append(np.concatenate(X_data, 0))

        # `ret` now contains [oct, fundus]
        # for the CVPR method, we want to return [[oct, fundus], [fundus, zeros]],
        # where `zeros` matches the deformation field shape for the flow loss
        volshape = X_data[0].shape[1:-1]
        zero = np.zeros((batch_size, *volshape, len(volshape)))
        full_ret = [ret, [ret[1], zero]]

        yield full_ret
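To train with it, the hookup would look roughly like this (the path, model construction, and step counts are placeholders; the repo's train script is the authoritative reference):

import glob

vol_folders = glob.glob('/path/to/patient_data/*')  # hypothetical path: one folder per patient
gen = cvpr2018_octfundus(vol_folders, batch_size=1)

# each batch is [[oct, fundus], [fundus, zeros]]: the network inputs, and the
# targets for the (image-matching, flow-regularization) loss pair
model.fit_generator(gen, steps_per_epoch=100, epochs=10)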

As I say above, though, this is its own (cool!) research project, as the images are significantly different from MRI, and the loss you are likely to need will be different. We have some internal differentiable mutual-information-based losses that might work for you, which I'm happy to provide by email for now (while we're still testing them) and expect to release soon.

JianshuiChen commented 5 years ago

@adalca Thanks a lot.

norris9410 commented 5 years ago

Hi Adrian,

Thanks for the sample code. Just to confirm: the returned "zero" matrix (in [[oct, fundus], [fundus, zeros]]) is expected to have the same shape/size as the moving image, correct? That seems to be how it works both in the sample code and in datagenerators.py.

adalca commented 5 years ago

The same size/shape as the deformation field (assuming you are using a model whose second output is the deformation field, as in the original).

norris9410 commented 5 years ago

Right, thanks for that. If the fixed and moving images share the same shape, then the deformation field is the same size as either one. That's straightforward; may I ask what happens if the fixed and moving images have different shapes? (Although I think one way is to resize all images to the same shape, as you mentioned in the paper.)

adalca commented 5 years ago

They would be the same size, except that the field has a vector at each voxel instead of a single intensity. So instead of batch_size x W x H x Z x 1, you have batch_size x W x H x Z x 3.
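To make the shapes concrete (the volume size here is just an example):

import numpy as np

vol_shape = (160, 192, 224)  # example 3D volume size
img = np.zeros((1, *vol_shape, 1))                # image: one intensity channel
flow = np.zeros((1, *vol_shape, len(vol_shape)))  # field: a 3-vector per voxel, so 3 channels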

Although you could envision simple modifications to voxelmorph, in its original form we required images to be the same size (and very roughly rigidly aligned).