Phenotypic Profiling

Machine learning for predicting 15 single-cell phenotypes from cell morphology profiles.

License: Creative Commons Attribution 4.0 International

Scientists can now routinely extract high-content, high-dimensional cell morphology representations from microscopy images. However, these cell morphology features currently represent a "hidden code" that must be further interpreted in order to understand and assign biological meaning.

Here, we hypothesize that nuclear morphology can provide a window into single-cell phenotype that can be broadly applied across cell types, treatments, and experimental designs (staining, microscopy acquisition parameters, etc.). We test this hypothesis by training machine learning models to predict specific phenotypes from easily accessible and reproducible single-cell morphology representations.

Specifically, we use publicly available data from the MitoCheck consortium, which includes 2,916 single cells labeled with one of 15 different phenotypes, to train a multiclass logistic regression model to predict phenotype. We extracted CellProfiler and DeepProfiler features from all MitoCheck nuclei. See https://github.com/WayScience/mitocheck_data for details on how we accessed and processed these data.
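
As a rough illustration of this setup, the sketch below trains a multinomial logistic regression on a labeled single-cell feature table with scikit-learn. It is a minimal sketch, not the repository's actual training code (see 2.train_model for that), and the file path and label column name are hypothetical placeholders.

```python
# Minimal sketch of the modeling approach; file path and column name are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# One row per labeled single cell: a phenotype label column plus numeric morphology features
profiles = pd.read_csv("labeled_single_cell_profiles.tsv", sep="\t")
labels = profiles["Mitocheck_Phenotypic_Class"]
features = profiles.drop(columns=["Mitocheck_Phenotypic_Class"])

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0
)

# LogisticRegression handles the 15-class problem as a multinomial (softmax) model
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```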

We focused on assessing how well this approach generalizes to new datasets not seen during model training. We tested generalizability in two scenarios: (1) a leave-one-image-out analysis, and (2) predicting single-cell phenotype in the JUMP-CP pilot data.
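
Scikit-learn's LeaveOneGroupOut expresses the first scenario directly when image identifiers are used as the grouping variable, as in the hedged sketch below; the file path and column names are assumptions, not the repository's exact evaluation code.

```python
# Sketch of a leave-one-image-out evaluation; file path and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

profiles = pd.read_csv("labeled_single_cell_profiles.tsv", sep="\t")
y = profiles["Mitocheck_Phenotypic_Class"]   # phenotype label (hypothetical column)
groups = profiles["Metadata_Image"]          # image identifier (hypothetical column)
X = profiles.drop(columns=["Mitocheck_Phenotypic_Class", "Metadata_Image"])

# Each fold holds out every cell from one image, so test cells never share an image
# with training cells.
scores = cross_val_score(
    LogisticRegression(max_iter=1000), X, y, groups=groups, cv=LeaveOneGroupOut()
)
print(f"Mean held-out-image accuracy: {scores.mean():.3f}")
```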

Figure 1 describes the dataset and our approach for training and evaluating our model.

[Figure 1 image: main_figure_1]

Figure 1. Dataset and analysis approach. (A) Single-cell counts per labeled phenotype, stratified by phenotype category. The labeled MitoCheck dataset included a total of 2,916 single nuclei. The original dataset contained labels for 16 classes, but we removed the "folded" class because of its low count. (B) Our analysis pipeline incorporated image analysis, image-based profiling, and machine learning. We also assessed model generalizability through a leave-one-image-out analysis and by applying our models to the Joint Undertaking in Morphological Profiling Cell Painting (JUMP-CP) pilot dataset.

Environment Setup

Perform the following steps to set up the phenotypic_profiling environment necessary for processing data in this repository.

Step 1: Create Phenotypic Profiling Environment

```sh
# Run this command to create the conda environment for phenotypic profiling
conda env create -f phenotypic_profiling_env.yml
```

Step 2: Activate Phenotypic Profiling Environment

```sh
# Run this command to activate the conda environment for phenotypic profiling
conda activate phenotypic_profiling
```

Repository Structure:

The repository structure is as follows:

| Order | Module | Description |
| :---- | :----- | :---------- |
| 0.download_data | Download training data | Download labeled single-cell dataset from mitocheck_data |
| 1.split_data | Create data subsets | Create training and testing data subsets |
| 2.train_model | Train model | Train ML models on combinations of features, data subsets, balance types, and model types |
| 3.evaluate_model | Evaluate model | Evaluate ML models on all data subsets |
| 4.interpret_model | Interpret model | Interpret ML model coefficients |
| 5.validate_model | Validate model | Validate ML models on other datasets |
| 6.single_cell_images | Single cell images | View single cell images and model interpretation |
| 7.figures | Figures | Create paper-worthy figures |

Data

Specific data download/preprocessing instructions are available at: https://github.com/WayScience/mitocheck_data. This repository downloads labeled single-cell data from a specific version of the mitocheck_data repository. For more information see 0.download_data/.

We use the following two datasets from the mitocheck_data repository:

Supplementary Table 1 - Full list of JUMP-CP phenotype enrichment

We report the top 100 most enriched treatments per phenotype in Supplementary Table 1 of our paper. See jump_compare_cell_types_and_time_across_phenotypes.tsv.gz for the full list.
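
For readers who want to look beyond the top 100 entries, the snippet below is one way to load and filter the full table with pandas. The file name comes from this repository, but the column names and the example phenotype value are assumptions about the table's layout.

```python
# Load the full JUMP-CP enrichment table; column names below are hypothetical.
import pandas as pd

jump_enrichment = pd.read_csv(
    "jump_compare_cell_types_and_time_across_phenotypes.tsv.gz",
    sep="\t",
    compression="gzip",  # pandas would also infer this from the .gz extension
)

# Example: inspect the highest-scoring treatments for one phenotype
subset = jump_enrichment.query("phenotype == 'Apoptosis'")
print(subset.sort_values("enrichment_score", ascending=False).head(10))
```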

Machine Learning Models

We use Scikit-learn (sklearn) for data manipulation, model training, and model evaluation. Pedregosa et al., JMLR 12, pp. 2825-2830, 2011 describe scikit-learn as a machine learning library for Python. Its ease of implementation in a pipeline makes it ideal for our use case.

We consistently use the following parameters with many sklearn functions:

We use seaborn for data visualization. Waskom (2021) describes seaborn as a library for making statistical graphics in Python.
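
As a small illustration of this plotting stack, the hedged sketch below draws a per-phenotype score plot with seaborn from a tidy results table; the file name and column names are hypothetical, not the repository's actual figure code.

```python
# Plot hypothetical per-phenotype scores from a tidy results table with seaborn.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

scores = pd.read_csv("model_scores.tsv", sep="\t")  # hypothetical tidy results file
sns.barplot(data=scores, x="phenotype", y="f1_score", hue="model")
plt.xticks(rotation=90)
plt.tight_layout()
plt.savefig("f1_scores_per_phenotype.png", dpi=300)
```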

All parts of the machine learning pipeline are completed with the following feature types:

See mitocheck_data for more information on feature types and how they are extracted from the labeled MitoCheck cells.

We create two versions of the same machine learning model:

Intermediate Data

Throughout this repository, we store intermediate .tsv data in tidy long format, a standardized data structure (see Tidy Data by Hadley Wickham for more details). This data structure makes later analysis easier.
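
For concreteness, here is a minimal sketch of the idea: a hypothetical wide table of per-cell phenotype probabilities is melted so that each row holds one (cell, phenotype, value) observation, which downstream plotting and summary code can consume directly.

```python
# Illustrate tidy long format with hypothetical per-cell phenotype probabilities.
import pandas as pd

wide = pd.DataFrame({
    "cell_id": [1, 2],
    "Apoptosis_prob": [0.90, 0.05],
    "Interphase_prob": [0.10, 0.95],
})

# Wide -> tidy long: one row per (cell, phenotype, probability) triple
tidy = wide.melt(id_vars="cell_id", var_name="phenotype", value_name="probability")
tidy.to_csv("predicted_probabilities.tsv", sep="\t", index=False)
print(tidy)
```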

Some intermediate data used in this repository are too large to be stored on GitHub. These intermediate data are available on the Way Lab Zenodo page.

Reproducibility

Specific code and steps used are available within each module folder.

The Way Lab always strives for readable, reproducible computational biology analyses and workflows. If you struggle to understand or reproduce anything in this repository, please file an issue!