After multiple years of use, we have moved our virtual staining pipeline to pytorch-lightning and the ome-zarr data format to speed up experimentation and deployment. We will be integrating our virtual staining models with other models for segmentation and phenotyping. Check out the new pipeline at viscy.
microDL is a deep learning pipeline for efficient 2D and 3D image translation. We commonly use it to virtually stain label-free images, i.e., to predict fluorescence-like images. Label-free imaging visualizes many structures simultaneously, and virtual staining enables identification of diverse structures without extensive human annotation - the annotations are provided by the molecular markers of the structure. This pipeline was originally developed for 3D virtual staining of tissue and cell structures from label-free images of their density and anisotropy: https://doi.org/10.7554/eLife.55502. We are currently extending it to enable generalizable virtual staining of nuclei and membranes in diverse imaging conditions and across multiple cell types. We provide a computationally and memory efficient variant of U-Net (2.5D U-Net) for 3D virtual staining.
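To make the 2.5D idea concrete, here is a minimal, hypothetical PyTorch sketch (an illustration only, not microDL's implementation; a real U-Net adds skip connections and down/up-sampling, omitted here so the z-collapsing mechanism stays visible). The encoder convolves in 3D over a thin z-stack, "valid" convolutions along z collapse the stack to a single plane, and the decoder then operates purely in 2D:

```python
import torch
import torch.nn as nn

class Tiny25D(nn.Module):
    """Toy 2.5D translation network: 3D encoder, 2D decoder.

    Skip connections and down/up-sampling of a real U-Net are omitted
    to keep the z-collapsing mechanism visible.
    """

    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        # 3D convolutions with no z-padding shrink the stack by two slices
        # each: a 5-slice input collapses to a single plane after two.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, 16, kernel_size=3, padding=(0, 1, 1)),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=(0, 1, 1)),
            nn.ReLU(),
        )
        # The 2D decoder works on the collapsed (y, x) plane only.
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, x):                   # x: (batch, channel, z=5, y, x)
        x = self.encoder(x)                 # -> (batch, 32, 1, y, x)
        return self.decoder(x.squeeze(2))   # -> (batch, out_ch, y, x)

stack = torch.randn(2, 1, 5, 64, 64)   # two phase z-stacks
print(Tiny25D()(stack).shape)          # torch.Size([2, 1, 64, 64])
```

Because the decoder is 2D, activations and gradients for most of the network are stored per-plane rather than per-volume, which is what makes the 2.5D variant computationally and memory efficient.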
You can train a microDL model using label-free images and the corresponding fluorescence channels you want to predict. Once the model is trained on the provided dataset, you can use it to predict the same fluorescence channels, or segmented masks, in other datasets from their label-free images.
In the example below, phase images and corresponding nuclear- and membrane-stained images are used to train a 2.5D U-Net model. The trained model can then predict the nuclear and membrane channels from label-free phase images alone.
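To make the training setup concrete, here is a hedged, self-contained PyTorch sketch of one supervised step on such paired data. The model, tensors, and shapes are illustrative stand-ins rather than microDL's API; in practice, training is driven through the YAML-configured CLI described below.

```python
import torch
import torch.nn as nn

# Stand-in translation model: collapses a 5-slice phase stack into a
# 2-channel 2D prediction (channel 0 ~ nuclei, channel 1 ~ membrane).
model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=(5, 3, 3), padding=(0, 1, 1)),  # z: 5 -> 1
    nn.ReLU(),
    nn.Flatten(1, 2),                    # (B, 16, 1, Y, X) -> (B, 16, Y, X)
    nn.Conv2d(16, 2, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# In practice these come from registered label-free / fluorescence pairs;
# random tensors keep the sketch self-contained.
phase = torch.randn(8, 1, 5, 128, 128)   # label-free input stacks
fluor = torch.randn(8, 2, 128, 128)      # paired nuclear + membrane targets

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(phase), fluor)  # pixel-wise regression
loss.backward()
optimizer.step()
```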
microDL allows you to design, train, and evaluate U-Net models. It supports 2D U-Nets for 2D image translation and 2.5D (3D encoder, 2D decoder) U-Nets for 3D image translation. Our goal is to enable robust translation of images across diverse microscopy methods.

microDL consists of three modules that are accessible via CLI and customized via a configuration file in YAML format:

* [Preprocessing](micro_dl/preprocessing/readme.md): normalization, flatfield correction, masking, tiling
* [Training](micro_dl/train/readme.md): model creation, loss functions (w/wo masks), metrics, learning rates
* [Inference](micro_dl/inference/readme.md): on full images or on tiles that can be stitched to full images (see the stitching sketch at the end of this section)

Note: microDL also supports 3D U-Nets and image segmentation, but we don't use these features frequently and they are the least tested.

## Getting Started

### Introductory exercise from DL@MBL

If you are new to image translation or U-Nets, start with the [slides](notebooks/dlmbl2022/20220828_DLMBL_ImageTranslation.pdf) from the didactic lecture at [deep learning @ marine biological laboratory](https://www.mbl.edu/education/advanced-research-training-courses/course-offerings/dlmbl-deep-learning-microscopy-image-analysis). You can download test data and walk through the exercise by following [these instructions](notebooks/dlmbl2022/README.md).

### Using the command line interface (CLI)

Refer to the [requirements](#requirements) section to set up the microDL environment. If the dependencies are not compatible with the hardware environment at your computational facility, build a [docker](#docker) image to set up your microDL environment. Format your input data to match the microDL [data format](#data-format) requirements.

Once your data is formatted in a way that microDL understands, you can run preprocessing, training, and inference in three command lines. For config settings, see the module-specific readmes in [micro_dl/preprocessing](micro_dl/preprocessing/readme.md), [micro_dl/training](micro_dl/train/readme.md), and [micro_dl/inference](micro_dl/inference/readme.md):

```buildoutcfg
python micro_dl/cli/preprocessing_script.py --config <config path>
```
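The tile-and-stitch inference mentioned above can be sketched in a few lines of numpy. This is an illustration of the idea only, not microDL's inference module: `predict_tile` is a hypothetical stand-in for a trained model's forward pass, and the image size is assumed to be covered exactly by the tiling step.

```python
import numpy as np

def predict_tile(tile):
    """Hypothetical stand-in for a trained model's forward pass."""
    return tile

def stitch_predict(image, tile=64, overlap=16):
    """Predict overlapping tiles independently, then blend by averaging."""
    out = np.zeros_like(image, dtype=float)
    weight = np.zeros_like(image, dtype=float)
    step = tile - overlap
    for y in range(0, image.shape[0] - tile + 1, step):
        for x in range(0, image.shape[1] - tile + 1, step):
            patch = image[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] += predict_tile(patch)
            weight[y:y + tile, x:x + tile] += 1.0
    return out / np.maximum(weight, 1.0)  # average where tiles overlap

stitched = stitch_predict(np.random.rand(256, 256))
```

Averaging the overlaps suppresses the edge artifacts that per-tile prediction would otherwise leave at tile boundaries.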