Closed — zhenyu202020 closed this issue 2 years ago
Hi @zhenyu202020 ,
You can think of the (1+gamma) in the code as equivalent to the gamma in the paper/figure. The "1" here preserves the identity of the features. During training, gamma and beta are learnable weights initialized to a number close to zero. Because the "1" keeps the layer close to an identity mapping at initialization, it stabilizes the training.
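A minimal NumPy sketch of the idea above (not the repo's actual layer; the shapes and helper names here are illustrative assumptions). With gamma and beta near zero, (1 + gamma) is near one, so the layer starts out close to plain instance normalization:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # Normalize each channel of a single instance to zero mean, unit variance.
    # x has shape (channels, *spatial_dims); statistics are computed per channel.
    axes = tuple(range(1, x.ndim))
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def conditional_instance_norm(x, gamma, beta):
    # gamma and beta are learnable, initialized near zero, so (1 + gamma) starts
    # near 1 and the layer is approximately an identity on the normalized features.
    return (1.0 + gamma) * instance_norm(x) + beta

x = np.random.randn(4, 8, 8)
gamma = np.zeros((4, 1, 1))  # per-channel scale offset, starts at zero
beta = np.zeros((4, 1, 1))   # per-channel shift, starts at zero
out = conditional_instance_norm(x, gamma, beta)
```

At gamma = beta = 0, `out` equals `instance_norm(x)` exactly, which is what makes early training stable.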
Thanks for the fast response! I have solved my confusion. Thank you very much!
Hi @cwmok. Sorry to bother you again. I don't quite understand why the code computes "F_X_Y * norm_vector". What does "norm_vector" stand for? Looking forward to your reply!
Hi @zhenyu202020,
You may notice that, unlike VoxelMorph, our method outputs a normalized deformation field, i.e., with values in [-1, 1]. F_X_Y refers to the displacement vector field (F) that aligns image "X" to "Y" space. norm_vector transforms the normalized displacement vector field into an unnormalized one, similar to the output of VoxelMorph. Note that the correct multiplier for transforming a normalized deformation field into an unnormalized one should be (image_dim - 1)/2. However, all the results in our paper were produced with (image_dim - 1), so we keep it as (image_dim - 1). Keep that in mind when you want to visualize the deformation field or obtain an unnormalized deformation field.
Thank you for your patience in replying!
Hi @cwmok. Thank you so much for your work! In your code (line 410 in miccai2021_model.py), the equation of the conditional instance normalization is "out = (1. + gamma) * out + beta", but in your paper the equation is "out = gamma * out + beta". And in Fig. 1(b) of your paper, the equation is also written as "out = gamma * Instance Norm(self.norm(input)) + beta". I am confused about that. Looking forward to your reply!