Problem
Currently, we implement the following compute modes in the processing step of the pipeline:
QLIPP: pixel-wise reconstruction of retardance and deconvolution of phase from brightfield.
Fluorescence Deconvolution: deconvolution of fluorescence channels alone.
PhaseFromBF: deconvolution of phase from a brightfield stack alone.
If the QLIPP or PhaseFromBF mode is chosen, fluorescence deconvolution is treated as a post-processing step, as sketched below.
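For concreteness, here is a minimal sketch of that coupling; the function name, mode strings, and placeholder computations are illustrative stand-ins, not recOrder's actual API.

```python
# Minimal sketch of the current coupling; the placeholder computations below
# are stand-ins, not recOrder's actual compute calls.
def run_processing(stack, mode, deconvolve_fluorescence_after=False):
    outputs = {}
    if mode == "QLIPP":
        outputs["retardance"], outputs["phase"] = stack, stack   # placeholder compute
    elif mode == "PhaseFromBF":
        outputs["phase"] = stack                                 # placeholder compute
    elif mode == "FluorescenceDeconvolution":
        outputs["fluorescence"] = stack                          # placeholder compute
    # Fluorescence deconvolution is bolted on as post-processing, which forces a
    # second pass over the raw data when combined with QLIPP or PhaseFromBF.
    if deconvolve_fluorescence_after and mode in ("QLIPP", "PhaseFromBF"):
        outputs["fluorescence"] = stack                          # placeholder compute
    return outputs
```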
This coupling has been a) confusing to the user and b) detrimental to the efficiency of compute and data I/O.
A specific example of this is #81.
Proposed solution
The user should be presented with a simpler and cleaner set of choices.
One choice is between 2D and 3D reconstruction. Most experiments image a sample that is either thinner or thicker than the depth of field, and the spatio-temporal dimensions are consistent across channels most of the time.
The second choice is between types of reconstruction. The user may need any or all of the following (a configuration sketch follows this list):
Retardance and Orientation: pixel-wise reconstruction or deconvolution.
Phase from Brightfield.
Fluorescence Deconvolution.
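One way to express these two choices in a single request object, as a minimal sketch; the class and field names are hypothetical, not an existing recOrder/waveorder configuration schema:

```python
from dataclasses import dataclass, field
from typing import Literal, Set

# Hypothetical sketch of the proposed user-facing choices; the names are
# illustrative, not an existing recOrder/waveorder configuration schema.
@dataclass
class ReconstructionRequest:
    # Choice 1: 2D vs 3D reconstruction.
    dimensionality: Literal["2D", "3D"] = "3D"
    # Choice 2: any combination of the reconstruction types listed above.
    reconstructions: Set[str] = field(default_factory=set)

# Example: a thin sample, reconstructing retardance/orientation and phase together.
request = ReconstructionRequest(dimensionality="2D",
                                reconstructions={"birefringence", "phase"})
print(request)
```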
Under the hood, the pipelines need to be optimized so that the above reconstructions can be run in arbitrary combinations. Some of the above modes should be disabled if the relevant channels are not found in the input.
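A minimal sketch of that gating; the channel names and the mapping from channels to reconstructions are assumptions, not recOrder's actual naming:

```python
# Hypothetical sketch: enable a reconstruction only if its input channels are present.
# Channel names and the mapping below are assumptions, not recOrder's actual naming.
REQUIRED_CHANNELS = {
    "birefringence": {"State0", "State1", "State2", "State3"},  # polarization stack
    "phase": {"BF"},                                            # brightfield stack
}

def available_reconstructions(channels_in_store, fluorescence_channels=()):
    channels = set(channels_in_store)
    enabled = {name for name, required in REQUIRED_CHANNELS.items() if required <= channels}
    if channels & set(fluorescence_channels):
        enabled.add("fluorescence")
    return enabled

# Example: a dataset with polarization and brightfield channels but no fluorescence.
print(sorted(available_reconstructions(["State0", "State1", "State2", "State3", "BF"])))
# ['birefringence', 'phase']
```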
An idea for a clean implementation is that:
the user can run only one type of reconstruction from the UI at a time, provided the relevant input channels are present in the zarr store.
new channels can be added to an existing zarr store (sketched below).
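A minimal sketch of the second point using the plain zarr API; the (T, C, Z, Y, X) layout, path, and data are assumptions, not waveorder's writer interface:

```python
import numpy as np
import zarr

# Minimal sketch of appending a reconstructed channel to an existing zarr array.
# The (T, C, Z, Y, X) layout, path, and data are assumptions, not waveorder's writer API.
store_path = "example.zarr"  # hypothetical path

# Create a small single-channel array standing in for raw data.
raw = zarr.open(store_path, mode="w", shape=(1, 1, 8, 64, 64),
                chunks=(1, 1, 1, 64, 64), dtype="float32")
raw[:] = np.random.rand(1, 1, 8, 64, 64).astype("float32")

# Later, append a new channel (e.g. reconstructed phase) along the channel axis.
arr = zarr.open(store_path, mode="a")
phase = np.zeros((1, 1, 8, 64, 64), dtype="float32")  # stand-in for a real reconstruction
arr.append(phase, axis=1)
print(arr.shape)  # (1, 2, 8, 64, 64)
```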
Alternatives you have considered, if any
To develop efficient pipelines along the above lines, @talonchandler and @lihaoyeh have written scripts using the waveorder reader, writer, and compute modules. These scripts are also amenable to parallelization with multiprocessing. Once these scripts are finalized, we can incorporate them into the pipeline.
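For illustration, a minimal sketch of that per-position parallelization using only the standard library; reconstruct_position is a stand-in, not a waveorder compute call:

```python
import multiprocessing as mp
import numpy as np

# Minimal sketch of parallelizing per-position reconstructions; the function body
# is a stand-in, not a waveorder compute call.
def reconstruct_position(position_index):
    stack = np.random.rand(8, 64, 64)           # placeholder for reading one position
    return position_index, float(stack.mean())  # placeholder for the reconstruction

if __name__ == "__main__":
    with mp.Pool(processes=4) as pool:
        for index, result in pool.map(reconstruct_position, range(4)):
            print(f"position {index}: {result:.3f}")
```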