AdvancedImagingUTSW opened this issue 3 months ago
How can we begin to do analysis on multi-channel Z-stacks?
We would do the analysis between z-stack acquisitions. I assume it will be much slower. I just need a way to grab the volume and then feed the results (e.g., positions) into the next feature.
Presumably, this is all done in a blocking mode. I imagine doing this asynchronously would add significant complexity, but I am open to it if you think it is the way to go. In that case, we would have to move to the next position or z-stack while evaluating the previous one, and then react accordingly once the analysis is done.
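For what it's worth, a minimal sketch of the asynchronous variant could look like the following: the analysis of the previous volume runs in a worker thread while the acquisition moves on. The `acquire_z_stack()`, `analyze_volume()`, and `handle_results()` names are purely hypothetical placeholders, not existing API.

```python
# Minimal sketch of the asynchronous variant. acquire_z_stack(),
# analyze_volume(), and handle_results() stand in for the real acquisition
# and feature hooks; they are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def acquire_z_stack(position):          # placeholder for the blocking acquisition
    return np.zeros((64, 256, 256))

def analyze_volume(volume):             # placeholder for the 3D analysis
    return []                           # e.g., a list of identified positions

def handle_results(results):            # placeholder: e.g., update the position table
    pass

positions = [0, 1, 2]
with ThreadPoolExecutor(max_workers=1) as pool:
    pending = None
    for position in positions:
        volume = acquire_z_stack(position)      # acquire the next stack
        if pending is not None:
            handle_results(pending.result())    # react to the previous analysis
        pending = pool.submit(analyze_volume, volume)
    if pending is not None:
        handle_results(pending.result())
```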
To be even more explicit: find features in a large 3D volume acquired at the mesoscale, then switch to the nanoscale and interrogate them locally. This can be run as a feature. Preferably the image is in RAM, but it can also be read from disk if that is more practical. That spooled image writer could come in handy here too.
It is worth noting that 3D analysis is a RAM-intensive and slow endeavor. I would expect roughly 4x RAM overhead, so if we have a volume that is 4 GB, we should probably make sure that we can handle ~16 GB of processing. With the right GPU, the analysis could also use CUDA-based routines via CuPy...
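If CuPy ends up being an optional dependency, a simple pattern is to fall back to NumPy/SciPy when no GPU is present. The Gaussian filter below is just a stand-in for whatever the analysis actually does; this is a sketch, not a proposed implementation.

```python
# Sketch of an optional CuPy path with a CPU fallback. CuPy is assumed to be
# an optional dependency; the filtering step is only an example operation.
import numpy as np
from scipy import ndimage as cpu_ndimage

try:
    import cupy as xp
    from cupyx.scipy import ndimage as xdi
    GPU = True
except ImportError:
    xp, xdi, GPU = np, cpu_ndimage, False

def smooth_volume(volume, sigma=2.0):
    data = xp.asarray(volume)                       # host or device array
    smoothed = xdi.gaussian_filter(data, sigma=sigma)
    # Bring the result back to host memory if it was computed on the GPU.
    return xp.asnumpy(smoothed) if GPU else smoothed
```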
If we are going to load from disk, we might need to implement a few more image readers. Some already exist. TIFF, N5, OME-Zarr, HDF5... It would be powerful to be able to load the data at different resolutions if it is N5, OME-Zarr or HDF5. Tiff would have to be loaded and down-sampled afterwards.
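As a rough sketch of what multi-resolution loading could look like, assuming an OME-Zarr-style store that exposes pyramid levels "0", "1", ... as sub-arrays, plus a plain multi-page TIFF; the paths and level layout are assumptions, not our actual on-disk format.

```python
# Sketch of loading a volume at reduced resolution. The pyramid layout and
# down-sampling factors are illustrative assumptions.
import zarr
import tifffile
from skimage.transform import downscale_local_mean

def load_zarr_level(path, level=1):
    # OME-Zarr/N5-style stores can expose each pyramid level as a sub-array.
    root = zarr.open(path, mode="r")
    return root[str(level)][:]              # read one resolution level into RAM

def load_tiff_downsampled(path, factors=(2, 4, 4)):
    # TIFF has no pyramid, so load full resolution and down-sample afterwards.
    volume = tifffile.imread(path)
    return downscale_local_mean(volume, factors)
```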
We want people to be able to load their own 3D analysis feature, and we should also offer a few of our own that are already implemented and available for selection. Where would we begin to save 3D Python functions?
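One possible convention, purely as a sketch: each user module exposes an `analyze(volume)` function and lives in a known directory. The directory name, signature, and discovery mechanism below are assumptions, not existing API.

```python
# Hedged sketch of discovering user-supplied 3D analysis functions from a
# directory. The "analysis_features" directory and the analyze(volume)
# convention are assumptions.
import importlib.util
import pathlib

def load_analysis_functions(directory="analysis_features"):
    functions = {}
    for path in pathlib.Path(directory).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # Assumed convention: each module exposes analyze(volume) -> positions.
        if hasattr(module, "analyze"):
            functions[path.stem] = module.analyze
    return functions
```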
We want to be able to output the positions identified to the multi-position table so that we can switch modes.
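Extracting those positions from a binary analysis result could be as simple as taking region centroids with scikit-image; the column names in the rows below are an assumption about what the multi-position table expects.

```python
# Sketch of turning a binary analysis result into rows for the multi-position
# table. Column names (x, y, z) and units are assumptions.
from skimage.measure import label, regionprops

def mask_to_position_rows(mask):
    labeled = label(mask)
    rows = []
    for region in regionprops(labeled):
        z, y, x = region.centroid           # pixel coordinates, (plane, row, col)
        rows.append({"x": x, "y": y, "z": z})
    return rows
```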
We can also save the analysis results back to disk. For OME-Zarr and N5, data could actually be saved within the same folder hierarchy so that any data derived from the raw data is also present. For now, just save the analysis result as a 3D tiff file, but in the future it would be nice to do this properly.
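A minimal sketch of the interim TIFF output, assuming the analysis sub-directory convention described below for the low-resolution scan:

```python
# Sketch of saving a binary analysis result as a 3D TIFF next to the raw data.
# The "analysis" sub-directory and file name follow the convention suggested
# below and are otherwise assumptions.
import pathlib
import numpy as np
import tifffile

def save_analysis_result(mask, raw_data_dir, name="CH00_000000.tif"):
    out_dir = pathlib.Path(raw_data_dir) / "analysis"
    out_dir.mkdir(parents=True, exist_ok=True)
    tifffile.imwrite(out_dir / name, mask.astype(np.uint8))
```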
We could also do an analysis plugin, which would enable us to have new dependencies (e.g., for GPU-accelerated analysis). For now, I plan to just use numpy and scikit-image, which are already dependencies.
Will only be used in combination with a z-stack. The z-stack could be multi-channel, however. Could also be a part of a multi-position acquisition.
I am not certain how we have historically implemented the multi-resolution settings. For example, is the offset between the two microscopes defined as the middle of the image, or the corner?
For example, let's say I have an image volume that is 2048 x 2048 x 512, and I find two objects in it. How do I map coordinates from pixel space and stage space on the low-resolution unit to stage space on the high-resolution unit? I want the identified objects to be centered in the high-resolution z-stack acquisition that will follow...
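A hedged sketch of that mapping, assuming the recorded low-resolution stage position refers to the center of the volume (see the offset question above) and that a calibrated (x, y, z) offset between the two microscopes is available; both assumptions depend on how the multi-resolution settings are actually defined.

```python
# Sketch of mapping a pixel-space centroid in the low-resolution volume to a
# stage position for the high-resolution acquisition. Assumes the low-res
# stage position refers to the volume center and that a calibrated offset
# between the two microscopes exists; both are assumptions.
def pixel_to_high_res_stage(centroid_zyx, volume_shape_zyx, pixel_size_um,
                            step_size_um, low_res_stage_xyz, offset_xyz):
    z_pix, y_pix, x_pix = centroid_zyx
    nz, ny, nx = volume_shape_zyx
    # Displacement of the object from the volume center, in microns.
    dx = (x_pix - nx / 2.0) * pixel_size_um
    dy = (y_pix - ny / 2.0) * pixel_size_um
    dz = (z_pix - nz / 2.0) * step_size_um
    sx, sy, sz = low_res_stage_xyz
    ox, oy, oz = offset_xyz
    # Centering the object means targeting its stage coordinate directly.
    return (sx + dx + ox, sy + dy + oy, sz + dz + oz)
```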
For the low-resolution scan, the analysis results, which will be a binary image, can be saved in a sub-directory with the original data. Cell1/analysis/CH00_000000.tif, etc.
For the high-resolution scan, this will be in a separate path, simply because this is how the multi-position acquisition already works. If you don't think this is a good idea, we can adjust. Cell2/position1/CH00000000.tif, Cell2/position2/CH00...
The low-resolution Z-stack will have a relatively small step size. Ideally, the step size should be around 1 micron in order for the data to be properly Nyquist sampled. So you can imagine having a volume that is 2048 x 2048 x 2048 voxels for the low-resolution side.
The step size for the high-resolution Z-stack is typically around 167 or 200 nm.
Currently, our software looks at each frame as it comes off the camera and performs image analysis. In some instances, it might be best to look at a volume instead of a plane. Dushyant also wanted the ability to look back at a certain number of frames of history. So some more abstract way to control the number of images that we evaluate would be nice.
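One way to abstract this, sketched below under those assumptions: a rolling buffer of the last n frames that triggers the analysis whenever it is full, so n = 1 reproduces the current per-frame behavior, n = number of z-steps evaluates a whole volume, and intermediate values cover the "last n frames" history case.

```python
# Sketch of a rolling frame buffer that generalizes per-frame analysis.
# The analyze callable and its (n, y, x) input convention are assumptions.
from collections import deque
import numpy as np

class FrameHistory:
    def __init__(self, n_frames, analyze):
        self.buffer = deque(maxlen=n_frames)
        self.analyze = analyze              # callable taking a (n, y, x) stack

    def add_frame(self, frame):
        self.buffer.append(frame)
        if len(self.buffer) == self.buffer.maxlen:
            return self.analyze(np.stack(list(self.buffer)))
        return None                          # not enough frames accumulated yet
```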