TheDeanLab / navigate

navigate - open source light-sheet microscope controls
https://thedeanlab.github.io/navigate/

Efficient Volume Search #21

Closed AdvancedImagingUTSW closed 1 year ago

AdvancedImagingUTSW commented 2 years ago

Distant goal/idea for the autonomous mode of imaging operation.

The tissues are oddly shaped, so imaging them in a fixed grid (e.g., x0-x1, y0-y1, z0-z1) is often wasteful. Thus, we will need an effective way to map out the tissue boundaries in a coarse imaging mode (e.g., at 0.63x or 1x magnification), and then do follow-up imaging at a slightly higher resolution (e.g., 6x magnification). Remember that the low-resolution arm of the microscope has a motorized zoom servo that can automatically change the magnification of the imaging system.

So, what is the best way to do a search? Perhaps an R-Tree? It is slightly different because we aren't dealing with points, but images, which also need to be analyzed for the presence or absence of tissue. https://en.wikipedia.org/wiki/R-tree

Couple other interesting ideas here: https://blog.mapbox.com/a-dive-into-spatial-search-algorithms-ebd0c5e39d2a
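To make the R-tree idea concrete, here is a minimal sketch of how the XY footprints of already-imaged tiles could be indexed and queried, assuming the rtree Python package (bindings to libspatialindex) is available; the coordinates are made up for illustration:

from rtree import index

# Index the XY footprint of each tile that has been imaged and found to contain
# tissue (coordinates in stage units, e.g., microns).
idx = index.Index()
tiles_with_tissue = [
    (0, (10_000, 10_000, 12_200, 12_200)),  # (tile_id, (xmin, ymin, xmax, ymax))
    (1, (12_000, 10_000, 14_200, 12_200)),
]
for tile_id, bbox in tiles_with_tissue:
    idx.insert(tile_id, bbox)

# Which tissue-containing tiles overlap a proposed high-resolution field of view?
candidate_fov = (11_000, 10_500, 13_200, 12_700)
print(list(idx.intersection(candidate_fov)))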

At some point, we will have to start thinking about this.

AdvancedImagingUTSW commented 2 years ago

We can change the field of view between approximately 20 mm and 2 mm, as well as the X, Y, and Z position of the specimen. How do we find the perimeter of the specimen? How fine-grained should our search be? What is the ideal sampling (e.g., Nyquist)? Can we determine something like a mesh, or a contour, of the sample? And ultimately, once we have the sample boundary, can we come up with a good way to image at higher resolution so that we only image regions that contain the sample? @Rapuris @Elepicos

Rapuris commented 2 years ago

I think @Elepicos found a couple of actual algorithms to use. However, since our goal, to my understanding, is essentially 3D image segmentation, I think a possible approach might be to train a neural network that segments tissue boundaries. There are a lot of papers dealing with this topic for MRI images, but I think we might be able to retrain another model with our data and have it segment the different tissues. If this idea is even a possibility, I can read further into how other models were structured.

Rapuris commented 2 years ago

Samir's response:

1. We check in a striped pattern diagonally across the image, which is a very analog way of doing it, but I also have a few more segmentation methods, like checking the area near corners and such.
2. At least as a starting point, we can perhaps use a conversion to a polar graph to map where there are parts with no data, then use that to see at what points on the original image you start to enter regions that are completely empty.

Within each, there are a few more specific methods, but those are the grander-scale ideas. Especially with the first method, I was working through some segmentation methods earlier; I just would have to transcribe them to a digital format to show.
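A rough sketch of what the first idea (checking diagonal stripes of a coarse image for signal) could look like; the stripe count and thresholds are arbitrary placeholders, not agreed-upon values:

import numpy as np

def stripes_with_signal(image, num_stripes=16, intensity_threshold=500, count_threshold=50):
    """Walk anti-diagonal stripes across the image and report which contain signal."""
    rows, cols = image.shape
    # Assign each pixel to a stripe according to (row + column), i.e., its anti-diagonal.
    stripe_id = (np.add.outer(np.arange(rows), np.arange(cols)) * num_stripes) // (rows + cols)
    hits = []
    for s in range(num_stripes):
        stripe_pixels = image[stripe_id == s]
        if np.sum(stripe_pixels > intensity_threshold) > count_threshold:
            hits.append(s)
    return hits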

AdvancedImagingUTSW commented 2 years ago

Now that @zacsimile has defended his PhD, he will be joining us here and there while finishing up his work in the Northeast. He recommended the following for the efficient volume search.

Sparse Octree: https://github.com/python-microscopy/python-microscopy/blob/master/PYME/experimental/_octree.pyx

KDTree: https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html

Apparently they are the same, but one is for sparse data. He recommends that if RAM becomes an issue, we can move to the sparse implementation.
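For the scipy option, a minimal sketch of querying already-measured stage positions with cKDTree; the positions here are invented for illustration:

import numpy as np
from scipy.spatial import cKDTree

# Stage positions (x, y, z in microns) where tiles have already been acquired.
measured_positions = np.array([
    [25_000, 25_000, 1_000],
    [27_200, 25_000, 1_000],
    [25_000, 27_200, 1_000],
])
tree = cKDTree(measured_positions)

# Before probing a new location, find the nearest tile that has already been measured.
distance, nearest_index = tree.query([26_000, 26_000, 1_000], k=1)
print(distance, nearest_index)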

AdvancedImagingUTSW commented 2 years ago

I am uploading some data here: /endosome/archive/MIL/metastasis_project/20220316_tiling_test

Look at the meta data to get an idea of the pixel size and positioning of the images. It was acquired in a tiling format.

AdvancedImagingUTSW commented 2 years ago

What is our resolution for finding the feature? Let's say, in our 1D example, that we see tissue at 37,500 but not at 40,000. When do we stop cutting distances in half? Let's pseudo-arbitrarily say 2 mm. Why? The camera has a 6.5 micron pixel size and 2048 pixels in both x and y, and the higher end of our low-resolution microscope is 6x magnification, so the FOV = 2048 x 6.5 microns / 6 ≈ 2.2 mm.
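As a quick sanity check on that number (pixel size, sensor size, and magnification taken from the sentence above):

# Field of view at the 6x end of the low-resolution arm.
pixel_size_um = 6.5    # camera pixel size, microns
sensor_pixels = 2048   # pixels in both x and y
magnification = 6.0

fov_mm = pixel_size_um * sensor_pixels / magnification / 1000.0
print(fov_mm)  # ~2.2 mm, hence the ~2 mm stopping criterion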

AdvancedImagingUTSW commented 2 years ago

How do we do the second dimension?

Let's say Y goes from 0 to 50,000.

We started at (25,000, 25,000). We did a measurement at (12,500, 25,000) and at (37,500, 25,000).

Logically, the next measurements should be at (25,000, 12,500) and then (25,000, 37,500).
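A minimal sketch of how the alternating-axis probing could be organized; positions are in the same units as above (microns), and move_stage, detect_tissue, and acquire_image are placeholder helpers (the same names used in the pseudocode later in this thread):

def find_boundary_1d(fixed_value, axis, low, high, min_step=2_000):
    """Bisect along one axis while the other axis is held fixed.

    Assumes tissue is present at `low` and absent at `high`; returns the last
    position where tissue was seen, to within `min_step` (~one 6x field of view).
    """
    while (high - low) > min_step:
        mid = (low + high) / 2.0
        position = (mid, fixed_value) if axis == 'x' else (fixed_value, mid)
        move_stage(position)
        if detect_tissue(acquire_image()):
            low = mid    # tissue still present; the boundary is farther out
        else:
            high = mid   # past the boundary; pull back
    return low

# Probe X first at y = 25,000, then Y at x = 25,000, as in the example above.
x_edge = find_boundary_1d(25_000, 'x', low=25_000, high=50_000)
y_edge = find_boundary_1d(25_000, 'y', low=25_000, high=50_000)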

..........

The workflow I visualized early on was that we use our 1x magnification, with its 22 mm FOV, to find the tissue boundaries.

There is the possibility that we can then find the surface from those images.

Then, using the 6x magnification, we will exhaustively image within that surface and find every potential metastatic site.

We log the sites, and then we go to the 35x magnification microscope and image those sites.

AdvancedImagingUTSW commented 1 year ago

We can discuss this today.

The goal is to find a way to determine the tissue boundaries at low magnification (e.g., 1x), and then create the best mechanism for measuring within those boundaries at 6x magnification. We will explore the sample using a KD-tree (or similar), and then use computer vision to evaluate whether or not there is tissue at the location (it could be using your Ilastik feature, or another feature that we can implement). We want to be able to switch between detection modes.
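A hedged sketch of what switchable detection modes could look like; the threshold-based detector and the commented-out ilastik entry are illustrative placeholders, not existing navigate features:

import numpy as np

def detect_tissue_threshold(image, intensity_threshold=500, count_threshold=100):
    """Simple detector: tissue is present if enough pixels exceed an intensity threshold."""
    return int(np.sum(image > intensity_threshold)) > count_threshold

DETECTORS = {
    "threshold": detect_tissue_threshold,
    # "ilastik": detect_tissue_ilastik,  # e.g., a pixel-classifier-backed detector
}

def detect_tissue(image, mode="threshold", **kwargs):
    """Dispatch to the currently selected detection mode."""
    return DETECTORS[mode](image, **kwargs)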

For a 1D search, I imagine it as a while loop. Let's say the stage can go from 0 to 50 mm, and we start at 25 mm.

minimum_stage_step = 0.5                       # mm, stop bisecting below this step
stage_limit = [0, 50]                          # mm, stage travel range
current_position = 25                          # mm, start in the middle (tissue assumed present here)
step_size = stage_limit[1] - current_position
while step_size > minimum_stage_step:
    step_size = step_size / 2
    move_stage(current_position + step_size)
    tissue_is_present = detect_tissue(acquire_image())
    if tissue_is_present:
        # The boundary lies farther out; advance and keep bisecting.
        current_position = current_position + step_size
    # Otherwise hold position and halve the step again.

This is clearly not hashed out very well, but we can get creative!

AdvancedImagingUTSW commented 1 year ago


If we were to automatically place the sample into the microscope with a robot, would we be able to find it? This is a binary search problem, right? If there is no sample in the field of view, we have to go look for it. And how best can we go look for it?

Once we find it, can we handle if parts of it are outside of the field of view? If outside of the field of view, then we need to figure out where the real boundary is.

Then, once we know where the full boundary of the tissue is in X, Y, and Z, can we come up with an efficient way to map and image it at different resolutions (2x, 6x, high-resolution, etc.)?

AdvancedImagingUTSW commented 1 year ago

The most common problem that people face is when the sample is much bigger than the field of view. During our discussion today, I mentioned this paper: https://www.science.org/doi/10.1126/science.aau8302

Here, on page 10 of the Supplementary Information, they describe their approach:

For samples with rectangular imaging volumes, we typically specified a rectangular parallelepiped as hard limits for the tiled volume. The microscope software covered the volume with a 3D matrix of rectangular tiles with the desired tile overlap. For smaller data sets, the tile overlap was set at 4 μm in x/y and 8 μm in z. For larger data sets more affected by sample shrinkage over time or position errors near the limits of stage travel, the overlap was increased to, for example, 6 μm in x/y and 12 μm in z.

For large samples with non-rectangular extents, we chose to implement a strategy to avoid imaging tiles that had no signals. Termed “intelligent tiling”, the software automatically determined tiles that had signals, and then progressively imaged the neighboring tiles, stopping when there were no signals. We found it was most efficient to image a volume by minimizing the stage motion, especially the z stage motion. Therefore, all of the tiles at a single z stage position (i.e. the tiles in the same z slab) were imaged following a serpentine pattern before moving to the next z slab.

In detail, we first specified a rectangular parallelepiped as hard limits for the tiled volume and the microscope software covered the volume with a 3D matrix of rectangular tiles with the desired tile overlap as described. These tiles became the “candidate tile” list. Next, we found a tile of interest and set it as the “seed” tile where the software would begin imaging at. The seed tile initialized the “working on” queue. The microscope then began imaging tiles from the “working on” queue. If an imaged tile had signals, then its bordering six tiles were removed from the “candidate tile” list and placed in a “next batch” list. Whenever the “working on” queue was empty or the z position was about to switch, all the tiles in the “next batch” list and “working on” queue were combined and then sorted according to serpentine paths within each z slab and added to the “working on” queue. The z slabs were sorted from their distance to the seed point.

A tile was determined to have signals based on the set thresholds. Briefly, we set the “pixel intensity threshold” and the “count threshold”, as well as which emission channel to be used for the signal checking. During imaging of a tile, the number of pixels above the “pixel intensity threshold” were counted for every camera image with the specified emission channel. If the count was ever greater than the “count threshold”, the tile was considered to have signals. This “intelligent tiling” implementation was typically robust enough to automatically capture the signal of interest and follow the specimen contour. We could also view the data as it was acquiring and direct the imaging along a particular direction by circling areas in a reviewer tool to add tiles (or prevent tiles from being imaged) if needed.
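As a reference point for our own implementation, here is a schematic reimplementation of the queue logic they describe (not their actual code); seed, candidate_tiles, has_signal, and neighbors are hypothetical interfaces, and the serpentine ordering is simplified to a plain sort:

from collections import deque

def intelligent_tiling(seed, candidate_tiles, has_signal, neighbors):
    """Schematic version of the "intelligent tiling" queue logic described above.

    seed            -- tile index (i, j, k) known to contain signal
    candidate_tiles -- all tile indices covering the hard-limit volume
    has_signal(t)   -- images tile t and returns True if it passes the signal thresholds
    neighbors(t)    -- the six face-adjacent tile indices of t
    """
    candidate_tiles = set(candidate_tiles)
    candidate_tiles.discard(seed)
    working_on = deque([seed])
    next_batch = []
    imaged = []
    while working_on or next_batch:
        if not working_on:
            # Combine and sort the next batch; the paper sorts along serpentine paths
            # within each z slab and orders z slabs by distance from the seed.
            working_on = deque(sorted(next_batch, key=lambda t: (t[2], t[1], t[0])))
            next_batch = []
        tile = working_on.popleft()
        imaged.append(tile)
        if has_signal(tile):
            for n in neighbors(tile):
                if n in candidate_tiles:
                    candidate_tiles.discard(n)
                    next_batch.append(n)
    return imaged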

Autofocus was performed on a 200 nm diameter fluorescent bead located on the sample surface every 10-30 minutes for selected samples that required long-term imaging (~days). During each autofocus measurement, the bead was precisely located using a normal imaging volume sweep. The light sheet was statically held at the bead, while the objective piezo was swept. The fluorescence intensity as function of a piezo position was fitted with a Gaussian curve and the peak center gave the correct piezo offset to use.
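The autofocus step (fitting fluorescence intensity versus piezo position with a Gaussian and taking the peak center) could be sketched as follows; scipy's curve_fit is assumed, and the data here are synthetic:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(z, amplitude, center, sigma, offset):
    return amplitude * np.exp(-((z - center) ** 2) / (2 * sigma ** 2)) + offset

# Piezo positions swept past the bead, and the measured bead intensities (synthetic here).
piezo_positions = np.linspace(-5, 5, 21)   # microns
intensities = gaussian(piezo_positions, 1000.0, 0.7, 1.5, 100.0)

p0 = [intensities.max(), piezo_positions[np.argmax(intensities)], 1.0, intensities.min()]
params, _ = curve_fit(gaussian, piezo_positions, intensities, p0=p0)
best_focus_offset = params[1]  # the fitted peak center is the piezo offset to apply
print(best_focus_offset)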

zacsimile commented 1 year ago

Annie addressed a fair bit of this with #284.

AdvancedImagingUTSW commented 1 year ago

This is obviously working, but there will be a few tweaks to finalize it.