John1231983 opened 5 years ago
To be fair, all I did was port the SpatialTransformLayer (which is easy enough, since most of the functions are the same in TensorFlow and PyTorch) and build a U-Net. In terms of results, I used it for a different purpose than theirs, with temporal images that don't have large deformations between them, so I am not sure how it compares to their implementation.
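For context, a spatial transform layer of this kind just resamples a moving image at coordinates displaced by a flow field. The sketch below is a minimal NumPy illustration of that idea (bilinear sampling at `identity + flow`); the function name `warp2d` and the edge-clamping choice are my own assumptions, not the actual ported layer.

```python
import numpy as np

def warp2d(image, flow):
    """Illustrative sketch: bilinearly sample `image` (H, W) at
    coordinates identity + flow (H, W, 2), clamped to the image border.
    (Hypothetical helper, not the actual SpatialTransformLayer port.)"""
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sampling locations: pixel position plus the predicted displacement.
    x = np.clip(xs + flow[..., 0], 0, W - 1)
    y = np.clip(ys + flow[..., 1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    wx, wy = x - x0, y - y0
    # Standard bilinear interpolation between the four neighbours.
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)
zero_flow = np.zeros((4, 4, 2))
warped = warp2d(img, zero_flow)  # zero flow: the image comes back unchanged
```

This is essentially what `F.grid_sample` does in PyTorch, except `grid_sample` takes the absolute sampling grid in normalized [-1, 1] coordinates rather than a pixel-space displacement.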
@marianocabezas : Thanks. Could I ask you some questions about the implementation?
```python
theta = self.fc_loc(xs)
theta = theta.view(-1, 2, 3)
grid = F.affine_grid(theta, x.size())
x = F.grid_sample(x, grid)
```
Here `theta` is 2x3 for a 2D image and 3x4 for a 3D image, but the voxelmorph paper shows that the deformation field is a feature map output by the U-Net with size 3xHxW (H and W are the sizes of the input moving image). What is the relationship between `theta` and the deformation field (df)? Thanks
Interesting that you are the first person to reimplement voxelmorph in PyTorch. I wonder what the reproduced results look like. Does it work?