Open vreuter opened 6 months ago
For cross-correlation, we have the relation `offset = reference_image - offset_image` (see the scikit-image registration docs: https://scikit-image.org/docs/stable/auto_examples/registration/plot_register_translation.html#image-registration).
This means that using an observed bead image and a reference bead image, we get an offset that represents how to recover a hypothetical original spot image (at the time of reference) from an empirical spot image (not at the time of reference). Then `hypothetical = drift + empirical`.
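A minimal numpy-only sketch of that sign convention (a hypothetical helper, not the looptrace or scikit-image code): phase correlation between a reference and a drifted copy returns exactly `reference - moving` in coordinate terms, so shifting the moving image by the returned offset recovers the reference, i.e. `hypothetical = drift + empirical`.

```python
import numpy as np

def phase_corr_shift(reference, moving):
    """Integer-pixel shift such that reference ~= roll(moving, shift).

    Sketch of the phase-correlation convention: offset = reference - moving.
    """
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(cross_power / (np.abs(cross_power) + np.finfo(float).eps))
    peak = np.array(np.unravel_index(np.argmax(np.abs(corr)), corr.shape), dtype=int)
    # Wrap peaks in the upper half of each axis around to negative shifts.
    dims = np.array(corr.shape)
    peak[peak > dims // 2] -= dims[peak > dims // 2]
    return peak

# Reference bead at (20, 30); the "moving" frame is drifted by (+3, -2).
reference = np.zeros((64, 64))
reference[20, 30] = 1.0
moving = np.roll(reference, (3, -2), axis=(0, 1))  # bead now at (23, 28)

offset = phase_corr_shift(reference, moving)
print(offset)  # reference_coords - moving_coords = (20, 30) - (23, 28) = (-3, 2)
# hypothetical = drift + empirical: shifting back recovers the reference image.
assert np.array_equal(np.roll(moving, tuple(offset), axis=(0, 1)), reference)
```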
Our offset / shift / drift vector for the difference between centroids of Gaussian fits is here:
https://github.com/gerlichlab/looptrace/blob/5b783bc5ea78cc2611329cfb0b4554caeb8dc2d6/looptrace/Drifter.py#L129-L134
which accords with the scikit-image implementation of cross-correlation, good!
As noted in #130, the coarse drifts are used for extracting spot images for tracing, where each extraction amounts to cutting out an ROI, detected in a regional barcode imaging timepoint, from another imaging timepoint. We then need to find the shift of the timepoint from which to extract relative to the timepoint in which the ROI-defining spot was detected.
In particular, we're computing coordinates for an extraction in an offset space (relative to a spot detection timepoint). So again keeping with the `offset = reference_image - offset_image` equation, we want to compute something like `offset_coords_vector = reference_coords_vector - (relative_)offset`. We do that here:
where we have something like:

```
coarse_drift = int(frame_drift) - int(ref_drift)
             = int(ref_time_coord - locus_time_coord) - int(ref_time_coord - region_time_coord)
             = int(region_time_coord) - int(locus_time_coord)
             = offset with region_time as reference, locus_time as moving

==> target_min = roi_min - coarse_drift
            ~=~ offset_image = reference_image - offset
```
Hence the `coarse_drift` we compute there acts appropriately to compute coordinates in the locus-specific timepoint space from coordinates in the ROI detection (regional barcode imaging timepoint) space, by virtue of analogy with the equation used to compute the coarse drift correction itself. The same holds for `target_max`.
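In plain numbers (hypothetical values, purely to illustrate the arithmetic above): if each drift table entry is computed as `reference - moving` against the drift-correction reference timepoint, subtracting the two entries cancels the reference term and leaves a region-vs-locus offset, and subtracting that from the ROI bounds lands in the locus timepoint's space.

```python
# Hypothetical 1-D coordinates of the same bead in three timepoints.
ref_time_coord = 100.0     # drift-correction reference timepoint
region_time_coord = 104.0  # regional barcode (spot detection) timepoint
locus_time_coord = 97.0    # locus-specific timepoint we extract from

# Drift table entries, each computed as offset = reference - moving:
frame_drift = ref_time_coord - locus_time_coord   # 3.0
ref_drift = ref_time_coord - region_time_coord    # -4.0

# coarse_drift = offset with region_time as reference, locus_time as moving.
coarse_drift = int(frame_drift) - int(ref_drift)  # 3 - (-4) = 7
assert coarse_drift == int(region_time_coord) - int(locus_time_coord)

# Map ROI bounds from region-timepoint space into locus-timepoint space,
# by analogy with offset_image = reference_image - offset:
roi_min, roi_max = 50, 66  # bounds of an ROI detected in the regional timepoint
target_min = roi_min - coarse_drift  # 43
target_max = roi_max - coarse_drift  # 59
print(target_min, target_max)
```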
Fine-scale drift correction also accords with `drift = ref - mov`, as we recover the images in "ref" space, so `ref = mov + drift`:
https://github.com/gerlichlab/looptrace/blob/5b783bc5ea78cc2611329cfb0b4554caeb8dc2d6/looptrace/Tracer.py#L341-L346
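A toy check of that direction (numpy only, integer shifts for clarity; the actual fine drift correction applies subpixel shifts): applying `drift = ref - mov` to the moving image puts it back in "ref" space.

```python
import numpy as np

reference = np.zeros((32, 32))
reference[10, 12] = 1.0
# Moving image: the same spot, drifted by (+2, -3), so it sits at (12, 9).
moving = np.roll(reference, (2, -3), axis=(0, 1))

# drift = ref - mov, in coordinate terms: (10, 12) - (12, 9) = (-2, 3).
drift = np.array([10 - 12, 12 - 9])

# ref = mov + drift: shifting the moving image by the drift recovers "ref" space.
recovered = np.roll(moving, tuple(drift), axis=(0, 1))
assert np.array_equal(recovered, reference)
```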
Coarse drift correction information is also used in single-bead extraction: https://github.com/gerlichlab/looptrace/blob/5b783bc5ea78cc2611329cfb0b4554caeb8dc2d6/looptrace/bead_roi_generation.py#L68
Updated -- to test:
Original: In particular, check that the addition/subtraction would be the same in each case, e.g., whether it is `template = offset + drift` or `template = offset - drift`.