oligomyeggo opened this issue 2 years ago
Hi @oligomyeggo,
I have not re-normalized in the past just because I was subsetting the data. I think there's a difference between reproducibility and optimal processing here. In general, normalization aims to remove the effects of sequencing depth differences. I would assume this works fairly well in most cases with scran, also across the full dataset. The main question here is whether the assumption that fewer than 50% of genes are differentially expressed (either between coarse cluster centroids or between cells within a coarse cluster) is violated or not. Even if this assumption is violated, scran could still be okay if the differentially expressed genes are balanced between up- and down-regulation in these comparisons. If the assumption is not violated, then I don't think you need to re-normalize. If it is, then maybe normalizing the subset of the data separately would be useful. In that case, though, you will not be able to compare cells across different subsets, as your normalization baseline is different.
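One quick way to get a feel for whether this matters in your case is to compare the full-object scran size factors against plain library sizes for the cells you plan to subset. This is just my own suggestion, not part of the tutorial; the sketch below assumes the size factors were stored in `adata.obs['size_factors']`, raw counts in `adata.layers['counts']`, and uses placeholder cluster labels:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder cluster labels; replace with the clusters you plan to subset.
mask = adata.obs['louvain'].isin(['3', '7'])

# Library size per cell in the prospective subset (raw counts assumed in 'counts').
counts_sub = adata[mask].layers['counts']
lib_size = np.asarray(counts_sub.sum(axis=1)).ravel()

# scran size factors computed on the full object (assumed key 'size_factors').
size_factors = adata.obs.loc[mask, 'size_factors'].values

plt.scatter(np.log10(lib_size), np.log10(size_factors), s=5)
plt.xlabel('log10 total counts per cell')
plt.ylabel('log10 scran size factor (full-object run)')
plt.show()
```

If the points fall roughly on a line, depth differences dominate and re-normalizing the subset is unlikely to change much; strong curvature or large spread points to composition effects, which is when re-normalizing becomes more interesting.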
If you really want to re-normalize, you don't need to set the counts again... but everything else I would do. Also note that scran will generate its own coarse clustering if you don't provide one to the method. I did it at the time as I thought it might work better via louvain clustering on logCPM values than the original cosine-normalized implementation.
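To make the steps concrete, here is a rough sketch of what re-normalizing the subset could look like. It assumes rpy2 and the R package scran are installed, that raw counts live in `adata.layers['counts']`, and it uses placeholder names ('louvain', clusters '3'/'7', resolution, n_comps) that you would replace with your own:

```python
import numpy as np
import scanpy as sc
import rpy2.robjects as ro
from rpy2.robjects import numpy2ri

# 1) Subset: layers (including 'counts') are carried over, so there is no need
#    to re-assign adata_sub.layers['counts'].
adata_sub = adata[adata.obs['louvain'].isin(['3', '7'])].copy()

# 2) Coarse clustering on the subset to provide input groups for scran.
adata_pp = adata_sub.copy()
adata_pp.X = adata_pp.layers['counts'].copy()
sc.pp.normalize_total(adata_pp, target_sum=1e6)   # logCPM for the coarse clustering
sc.pp.log1p(adata_pp)
sc.pp.pca(adata_pp, n_comps=15)
sc.pp.neighbors(adata_pp)
sc.tl.louvain(adata_pp, key_added='groups', resolution=0.5)

# 3) Size factors from scran on raw counts (genes x cells); dense here for
#    simplicity, which can be memory-hungry for large subsets.
counts = adata_sub.layers['counts']
counts = counts.toarray() if hasattr(counts, 'toarray') else np.asarray(counts)
numpy2ri.activate()
ro.globalenv['data_mat'] = counts.T
ro.globalenv['input_groups'] = ro.StrVector(adata_pp.obs['groups'].astype(str))
ro.r('library(scran)')
# On older scran releases this function is called computeSumFactors.
size_factors = np.asarray(
    ro.r('calculateSumFactors(data_mat, clusters=input_groups, min.mean=0.1)')
)

# 4) Apply the new size factors, log-transform, and keep a normalized snapshot.
adata_sub.obs['size_factors'] = size_factors
adata_sub.X = counts / size_factors[:, None]
sc.pp.log1p(adata_sub)
adata_sub.raw = adata_sub
```

After this you would re-select highly variable genes and recompute PCA/neighbors/UMAP on adata_sub as usual to get the subset's own embedding.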
Hi @LuckyMD, I've been reprocessing some old data using your single cell tutorial workflow and have a best practices question (I am not sure if this is the correct place for this, or if it should be moved to the new scverse discourse group?). I have an adata object that is scran-normalized. I want to take a subset of clusters from the adata object and create a new adata_sub object with its own dimensionality reduction to investigate a subpopulation of interest. My understanding is that, had I opted for a basic log normalization, I would not need to re-normalize adata_sub, as log normalization is done on a per-cell basis. However, because scran normalization uses a coarse clustering of the cells present in the object, I would want to re-normalize adata_sub if adata had been normalized via scran, correct? What would be the best way to subset and re-normalize adata_sub? I am not sure which steps of the original scran normalization process need to be repeated and which can be omitted. For instance, I would want to perform a new clustering of the subsetted data for scran normalization and get new size factors, but I wouldn't need to set `adata_sub.layers['counts'] = adata_sub.X.copy()`, as adata_sub contains a subsetted `counts` layer from adata (correct?). Would I need to restore `adata_sub.raw = adata_sub` in this scenario? Thank you for any help and advice!