Hi Oren - yes, you are absolutely right. For my own registration pipelines, this feature-point-based alignment was only ever run on images with the same sampling rate, or on images that had already been through one round of alignment, so there was a natural way to resample the moving image onto the same voxel grid as the fixed image.
Your suggestion is a very good one: the radius really should be given in physical units. Whichever image has the finer voxel spacing should have a context extracted that is as close as possible to the given physical radius. Then the other image should have the exact same physical context extracted, with interpolation to ensure the number of voxels in both contexts is exactly the same. This preserves the most information without doing anything too expensive.
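A minimal sketch of what that could look like, assuming spots are given as voxel indices and using `scipy.ndimage.zoom` for the interpolation (the helper name and signature here are hypothetical, not bigstream's actual API):

```python
import numpy as np
from scipy.ndimage import zoom

def get_spot_context_physical(image, spot, spacing, radius_physical, target_shape):
    """Hypothetical sketch: extract a window around `spot` (voxel indices) whose
    extent approximates `radius_physical` along each axis, then resample it to
    `target_shape` so both images' contexts have identical voxel counts."""
    # number of voxels per axis that best approximates the physical radius
    radius_voxels = np.round(radius_physical / np.asarray(spacing)).astype(int)
    # window around the spot; spots near the image border yield smaller windows
    slices = tuple(
        slice(max(0, s - r), s + r + 1) for s, r in zip(spot, radius_voxels)
    )
    context = image[slices]
    # interpolate so both contexts end up with exactly the same shape
    factors = np.asarray(target_shape) / np.asarray(context.shape)
    return zoom(context, factors, order=1)
```

The coarser image's context would be upsampled to the finer image's voxel count, which is what preserves the most information from both images.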
I would love to build this soon, so I'm going to leave this issue open as a feature request. In the meantime, adding the following argument to `feature_point_ransac_affine_align` will always ensure it works and makes sense for images with different voxel spacings: `static_transform_list=[np.eye(4),]`
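For example (a sketch of the workaround in use; only `static_transform_list` comes from this thread, while the import path, the other argument names, and the toy data are my assumptions):

```python
import numpy as np
from bigstream.align import feature_point_ransac_affine_align  # assumed import path

# placeholder arrays standing in for real image data
fix = np.random.rand(64, 64, 64).astype(np.float32)  # fixed image, finer grid
mov = np.random.rand(32, 32, 32).astype(np.float32)  # moving image, coarser grid
fix_spacing = np.array([1.0, 1.0, 1.0])              # voxel size in physical units
mov_spacing = np.array([2.0, 2.0, 2.0])

affine = feature_point_ransac_affine_align(
    fix, mov, fix_spacing, mov_spacing,
    static_transform_list=[np.eye(4),],  # the workaround from this comment
)
```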
Thanks very much to @orena1 for providing a good solution to this in PR https://github.com/JaneliaSciComp/bigstream/pull/37, which was merged this afternoon.
https://github.com/GFleishman/bigstream/blob/3cd2e4b217639d09b5e2dd0e169a7c210d9cacef/bigstream/features.py#L26
Correct me if I am wrong, but `get_spot_context` gets the pixel content around each spot, and these values are then correlated pixel-by-pixel. But if one image has different voxel dimensions, this correlation does not make much sense, right?
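To make the concern concrete, here is a toy 1D illustration (not bigstream code, just a sketch) of how contexts with the same voxel count but different voxel spacings cover different physical extents:

```python
import numpy as np

def signal(x):
    # the same underlying physical pattern, sampled at two voxel spacings
    return np.sin(x)

ctx_fix = signal(np.arange(10) * 1.0)  # 10 voxels at 1.0 um = 10 um of context
ctx_mov = signal(np.arange(10) * 2.0)  # 10 voxels at 2.0 um = 20 um of context

# Correlating these pixel-by-pixel compares different physical extents,
# so even a perfectly aligned spot can score poorly.
print(np.corrcoef(ctx_fix, ctx_mov)[0, 1])
```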
I assume a solution would be to interpret the radius in physical units, so it is the same in both fix & mov, and then interpolate the pixel values so the dimensions are the same.
Thanks