After chatting with @talonchandler, we have plans to construct the zyx affine transformation by:
1) The YX part of the affine transformation is estimated from matched points selected in the phase and fluorescence channels (rotation, scaling, and translation).
2) The Z part is translation and scaling only: the translation comes from autofocus or from the points selected in the planes, and the scaling from the known microscope parameters (e.g. RF magnification, pixel size).
Together these let us return the proper metadata and an affine transform that maps phase to fluorescence. The quickest fix would be to require the user to always select all points within a single z-plane per channel (which restricts Z to pure translation); Z scaling can then be handled by the current implementation based on microscope parameters.
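The two steps above can be sketched as follows. This is a minimal illustration, not the actual implementation: `estimate_yx_affine` and `compose_zyx_affine` are hypothetical helper names, the YX part is solved as a plain least-squares 2D affine from the matched points, and the Z part is assumed to be a scale and translation supplied from autofocus / microscope parameters.

```python
import numpy as np

def estimate_yx_affine(src_yx, dst_yx):
    """Least-squares 2D affine (rotation, scaling, translation) mapping
    src_yx -> dst_yx. Points are (N, 2) arrays in (y, x) order, N >= 3."""
    src = np.asarray(src_yx, dtype=float)
    dst = np.asarray(dst_yx, dtype=float)
    # Homogeneous design matrix: each row is [y, x, 1].
    A = np.hstack([src, np.ones((src.shape[0], 1))])
    # Solve A @ X = dst in the least-squares sense; X.T is the 2x3 affine.
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X.T  # shape (2, 3): [R | t]

def compose_zyx_affine(yx_affine, z_scale, z_translation):
    """Embed the 2x3 YX affine into a 4x4 homogeneous ZYX matrix,
    with Z handled by scale + translation only (step 2 above)."""
    T = np.eye(4)
    T[0, 0] = z_scale        # Z scaling from microscope parameters
    T[0, 3] = z_translation  # Z translation from autofocus / selected planes
    T[1:3, 1:3] = yx_affine[:, :2]  # YX rotation + scaling
    T[1:3, 3] = yx_affine[:, 2]     # YX translation
    return T
```

With points selected in a single z-plane per channel (the "quickest fix"), the recovered YX part reduces to a pure translation and the Z components come entirely from the known parameters.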