A follow-up question: why do you scale the new displacement field predicted at each level with `range_flow=0.4`?
Hi @Lancial,
Because we are predicting a normalized displacement field in LapIRN. We follow the convention of PyTorch's `grid_sample` function to normalize the spatial dimensions.
Specifically, consider the displacement field in the x-direction: a magnitude of 1 in the (unnormalized) displacement field equals a shift of 1 pixel, while a magnitude of 1 in the normalized displacement field equals a shift of (x's dimension - 1) pixels.
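Under that convention, converting between the two representations is just one scale factor per axis. A minimal sketch (the function names are mine, not from the LapIRN repo):

```python
def pixel_to_normalized(disp_px, dim):
    """Convert a pixel-space displacement to the normalized convention
    described above, where a normalized magnitude of 1 corresponds to
    a shift of (dim - 1) pixels along that axis."""
    return disp_px / (dim - 1)

def normalized_to_pixel(disp_norm, dim):
    """Inverse conversion: normalized displacement back to pixel units."""
    return disp_norm * (dim - 1)

# e.g. a 4-pixel shift along an axis of size 161
print(pixel_to_normalized(4.0, 161))  # 0.025
```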
This is also why we need to transform the normalized displacement field back into a pixel-space displacement field before computing the smoothness regularization; see the code for more details.
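For instance, a first-order gradient smoothness penalty could rescale the field first. A hypothetical sketch, assuming an (N, 3, D, H, W) field and the (dim - 1) scaling described above — not the exact LapIRN implementation:

```python
import torch

def smoothness_loss(disp_norm):
    # disp_norm: (N, 3, D, H, W) normalized displacement field.
    # Rescale each channel by (dim - 1) to recover pixel units;
    # the channel order (z, y, x) is an assumption for this sketch.
    D, H, W = disp_norm.shape[2:]
    scale = torch.tensor([D - 1.0, H - 1.0, W - 1.0]).view(1, 3, 1, 1, 1)
    disp = disp_norm * scale
    # First-order finite differences along each spatial axis.
    dz = (disp[:, :, 1:] - disp[:, :, :-1]).abs().mean()
    dy = (disp[:, :, :, 1:] - disp[:, :, :, :-1]).abs().mean()
    dx = (disp[:, :, :, :, 1:] - disp[:, :, :, :, :-1]).abs().mean()
    return dz + dy + dx

# a constant field is perfectly smooth
print(smoothness_loss(torch.ones(1, 3, 8, 8, 8)).item())  # 0.0
```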
`range_flow=0.4` means the magnitude of the predicted normalized displacement field lies in [-0.4, 0.4].
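One common way to enforce such a bound is to pass the raw network output through a squashing nonlinearity and scale it by `range_flow`. Whether LapIRN does exactly this is not shown in this thread, so treat the following as an illustrative sketch:

```python
import torch

def bound_flow(raw, range_flow=0.4):
    # Squash the raw prediction into (-1, 1), then scale so the
    # normalized displacement lies in (-range_flow, range_flow).
    return torch.tanh(raw) * range_flow

raw = torch.randn(1, 3, 8, 8, 8) * 10.0
flow = bound_flow(raw)
print(bool(flow.abs().max() <= 0.4))  # True
```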
Thank you! I understand it now. Is there any reason you picked 0.4?
No particular reason. Empirically, 0.4 is large enough for most of the medical image registration tasks. If you are doing intra-subject registration/problem with small initial misalignments, feel free to lower the number.
I will close the issue. If you have any further questions, feel free to open a new one.
Hi! I'm trying to use the displacement version of your method, but found something a bit confusing.
When you take the result from a higher level and pass it to a lower level for refinement, you only trilinearly upsample the spatial dimensions of the displacement field but leave its magnitude unchanged. Doesn't this affect the grid sampler's behavior, since the sampled displacement no longer has the same physical scale as before?
```python
lvl1_disp, _, _, lvl1_v, lvl1_embedding = self.model_lvl1(x, y)
lvl1_disp_up = self.up_tri(lvl1_disp)
warpped_x = self.transform(x_down, lvl1_disp_up.permute(0, 2, 3, 4, 1), self.grid_1)
```
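The upsampling step above can be reproduced in isolation; a minimal sketch (the shapes are illustrative):

```python
import torch
import torch.nn as nn

up_tri = nn.Upsample(scale_factor=2, mode='trilinear', align_corners=True)
disp = torch.randn(1, 3, 10, 12, 14)  # (N, 3, D, H, W) displacement field
disp_up = up_tri(disp)
print(disp_up.shape)  # torch.Size([1, 3, 20, 24, 28])
# Interpolation only: the value range is not rescaled, so
# disp_up.abs().max() <= disp.abs().max().
```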
If the above description is confusing, I guess my question is simply: why not

```python
lvl1_disp_up = self.up_tri(lvl1_disp) * 2
```

? Maybe I have missed something else in your code. Can you help me out?