DeepRegNet / DeepReg

Medical image registration using deep learning
Apache License 2.0

Add free-form deformation model based on B-splines #372

Closed · YipengHu closed this issue 3 years ago

YipengHu commented 4 years ago

Subject of the feature

Adding free-form deformation model based on B-splines

Rueckert, D., Sonoda, L.I., Hayes, C., Hill, D.L., Leach, M.O. and Hawkes, D.J., 1999. Nonrigid registration using free-form deformations: application to breast MR images. IEEE transactions on medical imaging, 18(8), pp.712-721.

If the feature request is approved, would you be willing to submit a PR? (Help can be provided if you need assistance submitting a PR)

Yes

acasamitjana commented 4 years ago

That's what I thought. I've never implemented splines before, so I've probably missed several tricks, but it's a starting point.

Approach:

Implementation:

Bibliography:

  • B-splines formula: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=796284
  • Worth checking for an accelerated implementation: https://arxiv.org/abs/2004.05962
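For reference, the cited Rueckert et al. (1999) paper writes the local free-form deformation as a tensor product of cubic B-splines over a lattice of control points φ, with (u, v, w) the fractional offsets within a control-point cell:

```latex
T_{\mathrm{local}}(x, y, z) =
  \sum_{l=0}^{3} \sum_{m=0}^{3} \sum_{n=0}^{3}
  B_l(u)\, B_m(v)\, B_n(w)\, \phi_{i+l,\, j+m,\, k+n}
```

where the cubic B-spline basis functions are

```latex
B_0(u) = (1 - u)^3 / 6, \quad
B_1(u) = (3u^3 - 6u^2 + 4) / 6, \quad
B_2(u) = (-3u^3 + 3u^2 + 3u + 1) / 6, \quad
B_3(u) = u^3 / 6
```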

YipengHu commented 4 years ago
> Use low-resolution outputs from the registration network and interpolate to full size (X,Y,Z) using splines.

The registration network will output a low-res DDF whose size is specified by the control point spacing. Now you want to interpolate to get the values in between; after "dispersing" the control points, you can do that with just a transposed convolution. Is this what you meant to do in your interpolate function? I don't understand why you wanted the resize layer... @tvercaut (Copying in Tom in case I'm talking nonsense here ;) )
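The "disperse the control points, then transposed convolution" idea can be sketched in 1-D. This is a NumPy sketch of the technique, not DeepReg code; the function names are illustrative, and zero-stuffing followed by a full convolution is used as the equivalent of a strided transposed convolution:

```python
# 1-D sketch: interpolate a low-res DDF defined on control points by
# "dispersing" the control points (zero-stuffing) and convolving with a
# cubic B-spline kernel -- equivalent to a transposed convolution with
# stride = control point spacing.
import numpy as np

def cubic_bspline_kernel(spacing: int) -> np.ndarray:
    """Sample the cubic B-spline basis at 1/spacing steps over its [-2, 2] support."""
    u = np.arange(-2 * spacing, 2 * spacing + 1) / spacing  # support is 4 cells wide
    a = np.abs(u)
    k = np.zeros_like(u)
    k[a < 1] = (4 - 6 * a[a < 1] ** 2 + 3 * a[a < 1] ** 3) / 6
    m = (a >= 1) & (a < 2)
    k[m] = (2 - a[m]) ** 3 / 6
    return k

def interpolate_ddf_1d(control_points: np.ndarray, spacing: int) -> np.ndarray:
    """Transposed-convolution-style upsampling: zero-stuff, then convolve."""
    dispersed = np.zeros(len(control_points) * spacing)
    dispersed[::spacing] = control_points  # "disperse" the control points
    return np.convolve(dispersed, cubic_bspline_kernel(spacing), mode="same")
```

A quick sanity check is partition of unity: a constant control-point field interpolates to the same constant away from the boundaries, because the shifted B-spline basis functions sum to one everywhere.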

acasamitjana commented 4 years ago

> The registration network will output a low-res DDF which is specified by control point spacing;

Yes, I agree. I just thought it was easier (and more flexible) to leave the backbone network as it is and then downsample the output to the specified size. But you could also use a network that directly outputs a low-res DDF of the desired size. Is that what you mean?

> Now you want interpolate to get those values in between, after "dispersing" the control points, for which you can just use transposed convolution. Is this what you meant to do in your interpolate function?

That's right, I was heading in that direction. Having given it more thought, it's now clearer to me how to implement it.

acasamitjana commented 4 years ago

I have made several commits. I think this is the most flexible option, but I'm open to discussing any other implementation @YipengHu. In summary:

1. Important: in the config files (.yaml) we now need to specify not only the method name but also some parameters, such as the control point spacing. This will need a bit of discussion regarding backward compatibility.
2. I created a BSplines layer that (a) assumes that the input is a full-sized volume from the backbone network, (b) resizes it to the number of control_points + 3, (c) computes the filters offline, and (d) performs the interpolation using a transposed convolution with stride=control_point_spacing.
3. I added some unit tests.
4. It's float32, and I think we may accumulate errors in the filter coefficients when the spacing between control points is large.
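Steps (c) and (d) can be sketched as follows. This is a NumPy sketch under my reading of the comment, not the actual DeepReg layer; `bspline_filter_3d` and its signature are illustrative. The 3-D filter is separable, so it can be precomputed offline as the outer product of three 1-D cubic kernels; computing it in float64 and casting at the end is one way to limit the coefficient rounding mentioned in item 4:

```python
# Precompute the separable 3-D B-spline filter offline; it would then serve
# as the fixed weights of a transposed convolution with
# stride = control_point_spacing along each axis.
import numpy as np

def cubic_bspline_1d(spacing: int) -> np.ndarray:
    """1-D cubic B-spline sampled every 1/spacing over its [-2, 2] support."""
    u = np.abs(np.arange(-2 * spacing, 2 * spacing + 1, dtype=np.float64) / spacing)
    return np.where(u < 1, (4 - 6 * u**2 + 3 * u**3) / 6,
                    np.where(u < 2, (2 - u) ** 3 / 6, 0.0))

def bspline_filter_3d(spacing, dtype=np.float32) -> np.ndarray:
    """Separable 3-D filter: outer product of per-axis 1-D kernels.

    Computed in float64 and cast once at the end, to reduce the float32
    rounding of filter coefficients for large control-point spacings."""
    kx, ky, kz = (cubic_bspline_1d(s) for s in spacing)
    filt = np.einsum("i,j,k->ijk", kx, ky, kz)  # outer product
    return filt.astype(dtype)
```

For a spacing of 4 in every axis the filter has shape (17, 17, 17), and its coefficients sum to the product of the per-axis sums, which is a cheap correctness check via partition of unity.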