vishnu-vasan opened 9 months ago
This task has been moved to task_cell_cell_communication.
The API files and datasets have been created but the methods and metrics still need to be implemented.
@vishnu-vasan @dbdimitrov Would you be available for a meeting to discuss a timeline for the next steps, so we can create a new release of this task?
## Task motivation
The growing availability of single-cell data has sparked an increased interest in the inference of cell-cell communication (CCC), with an ever-growing number of computational tools developed for this purpose.
Different tools propose distinct preprocessing steps and diverse scoring functions that are challenging to compare and evaluate. Furthermore, each tool typically comes with its own set of prior knowledge. To harmonize these, Dimitrov et al. (2022) recently developed the LIANA framework, which was used as a foundation for this task.
## Task description
The challenges in evaluating the tools are further exacerbated by the lack of a gold standard to benchmark the performance of CCC methods. In an attempt to address this, Dimitrov et al. use alternative data modalities, including the spatial proximity of cell types and downstream cytokine activities, to generate an inferred ground truth. However, these modalities are only approximations of biological reality and come with their own assumptions and limitations. In time, more datasets with known ground-truth interactions will become available, from which the limitations and advantages of the different CCC methods will be better understood.
This subtask evaluates methods on their ability to predict interactions between spatially adjacent source and target cell types. It focuses on the prediction of interactions from steady-state, or single-context, single-cell data.
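For illustration, one plausible way to score such predictions is to rank predicted source-target interactions and compute an AUROC against binary adjacency labels. This is a hedged sketch, not the task's actual v1 metric; `adjacency_auroc` is a hypothetical helper using the rank-sum identity:

```python
import numpy as np

def adjacency_auroc(scores, labels):
    """AUROC of predicted interaction scores against adjacency labels.

    scores: predicted communication scores, one per (source, target)
    interaction; labels: 1 if the pair is in the inferred adjacency
    ground truth, else 0. Uses the rank-sum (Mann-Whitney) identity;
    ties are broken arbitrarily in this sketch.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    if n_pos == 0 or n_neg == 0:
        raise ValueError("need both positive and negative labels")
    ranks = scores.argsort().argsort() + 1  # 1-based ranks, ascending
    # AUROC = (sum of positive ranks - minimal possible sum) / (n_pos * n_neg)
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

A perfect method that scores all truly adjacent pairs above all non-adjacent pairs reaches 1.0; a fully inverted ranking gives 0.0.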
## Proposed ground-truth in datasets
Mouse brain atlas (Tasic et al., 2016): a murine brain atlas with adjacent cell types as the assumed benchmark truth, inferred from correlations of deconvolution proportions in matching 10x Visium slides (see Dimitrov et al., 2022). 14,249 cells × 34,617 features with 23 cell-type labels.
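The adjacency ground truth described above can be approximated by correlating per-spot deconvolution proportions across cell types. The sketch below assumes the deconvolution step has already been run; `adjacent_cell_type_pairs` and its `threshold` parameter are hypothetical names for illustration, not part of the task API:

```python
import numpy as np

def adjacent_cell_type_pairs(proportions, cell_types, threshold=0.0):
    """Infer putatively adjacent cell-type pairs from deconvolution output.

    proportions: (n_spots, n_cell_types) array of per-spot cell-type
    proportions, e.g. deconvolved from matching 10x Visium slides.
    Cell types whose proportions correlate positively across spots
    are assumed to be spatially adjacent (assumption, following the
    rationale of the proposed ground truth).
    """
    corr = np.corrcoef(proportions, rowvar=False)  # cell-type x cell-type
    pairs = set()
    for i, src in enumerate(cell_types):
        for j, tgt in enumerate(cell_types):
            if i != j and corr[i, j] > threshold:
                pairs.add((src, tgt))
    return pairs
```

The returned set is symmetric by construction, since Pearson correlation is, which matches treating adjacency as an undirected property of source/target cell-type pairs.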
## Initial set of methods to implement
The same set of methods as in v1.
## Proposed control methods
The same set of control methods as in v1.
## Proposed metrics
The same set of metrics as in v1.