ambrosejcarr opened this issue 5 years ago
It's not clear our object model works well for stitching. We would need to pull pixels from one ImageStack to another.
Is there a reason we need to support the application of transforms to both images and spots?
> We would need to pull pixels from one ImageStack to another.
I think stitching would always require loading two ImageStacks at a time.
> Is there a reason we need to support the application of transforms to both images and spots?
Am I interpreting properly that you'd prefer to only support transforms to spots?
> Am I interpreting properly that you'd prefer to only support transforms to spots?
I don't have a preference; I'm just pointing out that the two seem rather redundant. Transforming pixels is more expensive, but honestly, I'm skeptical of the claim that it's a whole lot more expensive, since you have to read the data to begin with.
I also think it might be worthwhile coming up with a standard model or file format for transformations.
Finally, would the transforms always be learned from the images? Would it ever make sense to do it in spot space?
currently waiting on https://github.com/scikit-image/scikit-image/pull/4023
Image Alignment
Image alignment in starfish is driven by the need to carry out two tasks:
1. Identify the cell of origin for each spot
2. Match spots across images
These problems are trivial when there is no movement of the tissue or microscope stage. However, when the images shift between captures, a computational solution must be applied to match the positions of cells and spots across images. Where fluorescent spots lie very close together, the second problem can be very challenging to solve.
Without a solution to these problems, starfish can only be used by groups whose pipelines register their images and apply the learned transformation immediately. Over half of the SpaceTx groups take different approaches, so enhancing our support for registration will improve starfish's value.
Dependencies
#751 Bag of Images
Definitions
Registration: the process of learning a transformation that maps different sets of data into one coordinate system. In starfish, it is the process of accounting for subtle shifts in the imaging apparatus between image captures. Registration does not refer to applying the transformation.
Stitching: the process of learning a transformation that combines multiple images with overlapping fields of view into a segmented panorama or high-resolution image. Stitching does not refer to applying the transformation.
Transformation: the process of applying a learned transformation to an image, often to register or stitch it.
Code: a series of fluorescent colors detected over multiple rounds of imaging, produced by hybridizing a set of fluorophores to an mRNA molecule in a pattern designed to specifically identify it.
Identify the cell of origin for each spot
This problem is relatively simple, as it involves placing spots of <1 µm diameter inside cells of 10–30 µm diameter. In addition, in most cases currently being studied, cells are not directly adjacent, so a small dilation of each cell's area can offset small registration errors. As a result, this problem is typically solved adequately by matching cells across images. The main reasons cell assignment is worth mentioning are:
Match spots across images
Image registration
Spots must be matched across images to build the codes that identify mRNA molecules. All coded assays we have examined while building starfish exhibit some movement of the images between rounds, and different types of movement require different analyses. Translation, scaling, and rotation are global, linear shifts that can all be estimated in the Fourier domain of an image; applying the estimated transform resamples the image to generate new pixel intensities ("sub-pixel registration").
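As a concrete illustration of the Fourier-domain approach, the sketch below uses scikit-image phase correlation to estimate a sub-pixel translation between two rounds and applies it by resampling. The helper name and parameters are illustrative, not part of starfish's API; older scikit-image releases expose the same functionality as skimage.feature.register_translation.

```python
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def register_rounds(reference: np.ndarray, moving: np.ndarray, upsample_factor: int = 100):
    """Estimate and apply a sub-pixel translation between two imaging rounds.

    Hypothetical helper for illustration only; it shows the Fourier-domain
    (phase correlation) approach described above, not starfish's API.
    """
    # phase_cross_correlation works in the Fourier domain; upsample_factor > 1
    # refines the estimated shift to sub-pixel precision.
    shift, error, _ = phase_cross_correlation(
        reference, moving, upsample_factor=upsample_factor
    )
    # Applying the shift resamples the moving image (spline interpolation),
    # generating new pixel intensities.
    registered = ndimage.shift(moving, shift)
    return shift, registered
```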
Spot registration
Estimating a shear, however, requires extracting a set of corresponding coordinates from the source and destination images; matching those coordinates yields the transformation. Typical approaches to these problems use corner detection, which in our data can be replaced by spot detection using fiducial beads, anchor probes (ISS), or nuclei (rough registration). The spots must be present in all images, but the approach can be made robust to drop-out of a small fraction of spots using a RANSAC algorithm.
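A minimal sketch of the RANSAC approach, assuming spot centroids have already been detected and matched between the source and destination images; the arrays, thresholds, and simulated drop-out here are illustrative only:

```python
import numpy as np
from skimage.measure import ransac
from skimage.transform import AffineTransform

# Hypothetical inputs: (N, 2) arrays of matched spot centroids (e.g. fiducial
# beads or anchor probes) detected in the source and destination images.
src = np.random.rand(50, 2) * 512
dst = src @ np.array([[1.0, 0.05], [0.0, 1.0]]) + np.array([3.2, -1.7])  # sheared and shifted
dst[:5] += 20  # a few mismatched / dropped-out spots to exercise RANSAC

# RANSAC fits an affine model to the correspondences while rejecting outliers,
# making the estimate robust to a small fraction of missing or mismatched spots.
model, inliers = ransac(
    (src, dst),
    AffineTransform,
    min_samples=3,            # 3 point pairs determine an affine transform
    residual_threshold=2.0,   # pixels
    max_trials=1000,
)
print(model.params)  # 3x3 affine coefficient matrix
```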
The resulting affine transformation is a coefficient matrix that can be applied either to the image or to the coordinate space. The latter is cheaper to compute, but it means the images themselves remain unaligned, so vectorized or volumetric approaches cannot be applied across unregistered areas; there is an optimization trade-off.
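The trade-off can be seen directly in code: the same coefficient matrix either resamples pixels or moves coordinates. A minimal sketch with scikit-image, using an illustrative transform and hypothetical image/spot arrays:

```python
import numpy as np
from skimage.transform import AffineTransform, warp

# An estimated AffineTransform; the values are illustrative (in practice this
# would come from a fit such as the RANSAC sketch above).
model = AffineTransform(translation=(3.2, -1.7), shear=0.05)
image = np.random.rand(512, 512)
spot_xy = np.random.rand(100, 2) * 512  # (N, 2) spot coordinates, (x, y) order

# Option 1: resample the pixels. More expensive, but downstream vectorized or
# volumetric operations can then act on aligned images.
registered_image = warp(image, inverse_map=model.inverse)

# Option 2: transform only the coordinates. Cheap, but the underlying images
# remain unaligned.
registered_spot_xy = model(spot_xy)
```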
Matching spots
Even in registered data, small aberrations in spot position mean it is often not feasible to find spots in one image and simply measure intensity at the same location in each of the others. Instead, many groups find spots in each round and match them using a local search. In crowded data this local search can be complex, and in the SeqFISH case the search must be seeded from each round in order to identify consensus codes and reduce false positives. Other approaches instead decode pixels and, to ensure alignment, blur the images before decoding to spread signal over a local area; however, this has the drawback of reducing signal-to-noise.
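A minimal sketch of such a local search, using a k-d tree to match each anchor-round spot to its nearest neighbour in another round within a fixed radius. The function name, radius, and single-anchor seeding are illustrative assumptions; starfish's actual matching (e.g. seeding from every round for SeqFISH) is more involved.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_search_match(anchor_spots: np.ndarray, round_spots: np.ndarray, radius: float = 2.0):
    """Match each spot in `anchor_spots` to the nearest spot in `round_spots`
    that lies within `radius` pixels (hypothetical helper, for illustration).
    """
    tree = cKDTree(round_spots)
    distances, indices = tree.query(anchor_spots, distance_upper_bound=radius)
    # Spots with no neighbour within `radius` come back with infinite distance.
    matched = distances != np.inf
    return matched, indices
```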
Applying registration transforms
A learned affine transformation can be applied either to an image or to points on a coordinate grid. Registering spot locations is simple, as it requires transforming only a single coordinate per spot. Registering regions of interest stored as polygons is similar, involving only a transformation of the vertices. Registering regions of interest stored as masks is likely more complex, and additional research is required to solve this problem.
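To make the three cases concrete, here is a minimal sketch with scikit-image. The mask branch is only one possible approach (nearest-neighbour warping of a label image) and reflects the open question noted above rather than a settled starfish design; the transform and arrays are illustrative.

```python
import numpy as np
from skimage.transform import AffineTransform, warp

transform = AffineTransform(rotation=0.01, translation=(2.0, -1.5))  # illustrative

# Spots and polygon vertices: transform the (x, y) coordinates directly.
spot_xy = np.array([[10.0, 12.0], [40.5, 33.2]])
polygon_xy = np.array([[5.0, 5.0], [5.0, 50.0], [50.0, 50.0], [50.0, 5.0]])
registered_spots = transform(spot_xy)
registered_polygon = transform(polygon_xy)

# Masks: one possible (unvalidated) approach is to warp the label image with
# nearest-neighbour interpolation so that label values are preserved.
mask = np.zeros((64, 64), dtype=np.int32)
mask[10:30, 10:30] = 1
registered_mask = warp(
    mask, inverse_map=transform.inverse, order=0, preserve_range=True
).astype(np.int32)
```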
Spot alignment approaches used in SpaceTx
In-situ sequencing v0
In-situ sequencing v1
MERFISH
MExFISH
BaristaSeq
osmFISH
3d smFISH (Allen)
DartFISH
BioHub
SeqFISH
Summary of strategies
Unified approach for starfish:
The following model would support most of the above variations:
Implications for starfish object model
Investigations:
Assumptions
Risks
Implementation Requirements
Current implementation:
starfish.image.registration
Notes from previous issues: