We introduce Spatial Transcriptomics ANomaly Detection and Subtyping (STANDS), an innovative computational method that integrates multimodal information, e.g., spatial gene expression, histology images and single-cell gene expression, to not only delineate anomalous tissue regions but also reveal their compositional heterogeneity across multi-sample spatial transcriptomics (ST) data.
The accurate detection of anomalous anatomic regions, followed by their dissection into biologically heterogeneous subdomains across multiple tissue slices, is of paramount importance in clinical diagnostics, targeted therapies and biomedical research. This procedure, which we refer to as Detection and Dissection of Anomalous Tissue Domains (DDATD), serves as the first and foremost step in a comprehensive analysis of tissues harvested from affected individuals, aiming to reveal population-level and individual-specific factors (e.g., pathogenic cell types) associated with disease development.
STANDS is an innovative framework built on a suite of specialized Generative Adversarial Networks (GANs) for seamlessly integrating the three tasks of DDATD. The framework consists of three components.
Component I (C1) trains a GAN model on the reference dataset, learning to reconstruct normal spots from their multimodal representations derived from spatial transcriptomics data and the associated histology image. The model is then applied to the target datasets to identify anomalous spots as those with unexpectedly large reconstruction deviances, which serve as anomaly scores.
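To illustrate the reconstruction-based scoring idea, the sketch below computes per-spot anomaly scores as reconstruction errors from a trained generator. This is a minimal sketch rather than STANDS' exact implementation: the `generator` model, the tensor shapes and the use of an L2 deviance are assumptions.

import torch

def anomaly_scores(generator, spots, device="cpu"):
    """Score each spot by its reconstruction deviance (illustrative sketch)."""
    generator.eval()
    spots = spots.to(device)
    with torch.no_grad():
        recon = generator(spots)                   # reconstructed expression profiles
        scores = torch.norm(recon - spots, dim=1)  # per-spot L2 deviance = anomaly score
    return scores.cpu().numpy()

# Hypothetical usage: flag spots whose score exceeds a chosen cutoff
# scores = anomaly_scores(trained_generator, target_spots)
# anomalous = scores > scores.mean() + 3 * scores.std()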
Component II (C2) aims to diminish non-biological variations (e.g., batch effects) among anomalies by aligning the target datasets in a common space. It employs two cooperative GAN models to identify pairs of reference and target spots that share similar biological content, based on which the target datasets are aligned to the reference data space via “style-transfer”.
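As a simplified stand-in for the pairing step in C2 (the real model learns the pairing adversarially), one can match each target spot to its most similar reference spot by cosine similarity in a shared embedding space; the embedding inputs here are assumptions.

import numpy as np

def match_spots(ref_emb, tgt_emb):
    """Pair each target spot with its most similar reference spot (illustrative)."""
    ref = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = tgt @ ref.T               # (n_target, n_reference) cosine similarities
    return sim.argmax(axis=1)       # index of the best-matching reference spot per target spot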
Component III (C3) fuses the embeddings and reconstruction residuals of the aligned anomalous spots and feeds them into an iterative clustering algorithm that groups anomalies into distinct subtypes.
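The following sketch conveys the fusion-then-clustering idea using a plain k-means step; STANDS itself uses an iterative clustering algorithm, so the `KMeans` choice, the feature inputs and the number of subtypes are assumptions for illustration only.

import numpy as np
from sklearn.cluster import KMeans

def subtype_anomalies(embeddings, residuals, n_subtypes=3, seed=0):
    """Cluster anomalies on fused embeddings and reconstruction residuals (illustrative)."""
    fused = np.concatenate([embeddings, residuals], axis=1)   # fuse the two feature blocks
    km = KMeans(n_clusters=n_subtypes, random_state=seed, n_init=10)
    return km.fit_predict(fused)                              # one subtype label per anomalous spot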
STANDS is distributed as a Python package and requires a working Python installation; Python 3.9 is recommended.
You can download the package from GitHub and install it locally:
git clone https://github.com/Catchxu/STANDS.git
cd STANDS/
python3 setup.py install
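After installation, a quick import check can confirm that the setup succeeded; this assumes the package is importable as `stands`, which is not confirmed by the install commands above.

# Hypothetical post-install check; the import name `stands` is an assumption.
import stands
print("STANDS imported successfully")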
STANDS offers a variety of functionalities, including but not limited to: detecting anomalous tissue domains in target datasets, aligning multi-sample data to remove non-biological variations (e.g., batch effects), and subtyping the detected anomalies into biologically distinct subdomains.
Before starting the tutorial, we need to make some preparations, including installing STANDS and its required Python packages, downloading the datasets used in the tutorial, and so on. These preparations are described at STANDS Preparations. Additionally, when dealing with multimodal data structures involving both images and gene expression matrices, we strongly recommend using a GPU and pretraining STANDS on large-scale public spatial transcriptomics datasets. This ensures faster execution of STANDS and improves performance in the modules related to image feature extraction and feature fusion.
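A minimal check for GPU availability, assuming a PyTorch backend (an assumption, not confirmed by this section):

import torch

# Check whether a CUDA-capable GPU is visible; GAN training and image feature
# extraction benefit greatly from GPU acceleration.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Running on:", device)
if device == "cuda":
    print("GPU:", torch.cuda.get_device_name(0))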
Finally, more useful information is available in the online documentation, and the tutorials offer a quick start.
Please see the tutorials for complete documentation of all STANDS functions. For any questions or comments, please use GitHub issues or contact Kaichen Xu directly at kaichenxu358@gmail.com.
@article{xu2024detecting,
  title={Detecting anomalous anatomic regions in spatial transcriptomics with {STANDS}},
  author={Xu, Kaichen and Lu, Yan and Hou, Suyang and Liu, Kainan and Du, Yihang and Huang, Mengqian and Feng, Hao and Wu, Hao and Sun, Xiaobo},
  journal={Nature Communications},
  volume={15},
  number={1},
  pages={8223},
  year={2024},
  publisher={Nature Publishing Group UK London}
}