Open reubenthomas opened 1 year ago
Could you give a clarification on exactly how weights were handled for each of the 4 different limma methods in the plot? It's not clear how the methods mentioned in your comment map to the names in the plot.
Sure,
limma-voom-muscat: this is what is implemented in this package.
The main differences in the code for each of the other limma-voom implementations are:
limma-voom-orig
## original limma-voom: observation-level weights from voom only
v <- voom(Y, design = mm)
fit <- lmFit(v, mm)
limma-voom-sampleQuality
## with sample quality weights
v <- voomWithQualityWeights(Y, design = mm)
fit <- lmFit(v, mm)
limma-voom-cellWeights
## with cell-count (n_cells) weights
v <- voom(Y, design = mm, weights = weights)
fit <- lmFit(v, mm, weights = v$weights * weights)
where the weights variable is equivalent to the one defined in muscat, based on the number of cells for a given sample in the respective cluster.
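For concreteness, a rough sketch of how such a cell-count weight matrix can be built (illustrative only, not the exact muscat code; sample_id and cluster_id are hypothetical cell-level vectors and k is the cluster being analysed):
## cells per pseudobulk sample in cluster k (hypothetical bookkeeping)
n_cells <- table(sample_id[cluster_id == k])
## one weight per sample, replicated across genes so it can be passed to
## voom()/lmFit() and multiplied elementwise with v$weights
weights <- matrix(as.numeric(n_cells[colnames(Y)]),
                  nrow = nrow(Y), ncol = ncol(Y), byrow = TRUE)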
Please let me know if this is still not clear
That's clear to me. Thanks!
By the way, one additional possibility that you haven't tried in the benchmark is what dreamlet does, which is to pass the cell-count weights to voom (or voomWithDreamWeights) but then proceed as in limma-voom-orig. My expectation is that this should perform almost identically to limma-voom-orig in this benchmark, but I'm not certain. Based on the code examples you provided, it would look like:
v <- voom(Y, design = mm, weights = weights)
fit <- lmFit(v, mm)
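In all of the voom-based variants above, the remaining steps would be the standard limma workflow, roughly as follows (coef = 2 is an assumption about which column of mm carries the group effect):
fit <- eBayes(fit)
res <- topTable(fit, coef = 2, number = Inf)   # ranked table of group differences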
Yet another theoretically reasonable option would be to use the cell-count weights to account for sequencing depth, skip voom, and use eBayes with trend = TRUE instead, since this is what the limma authors recommend when you want a mean-variance trend that doesn't account for differences in sequencing depth:
fit <- lmFit(Y, mm, weights = weights)
fit <- eBayes(fit, trend = TRUE)
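One caveat with this sketch: the limma documentation applies the trend = TRUE route to logCPM values rather than raw counts, so Y would need to be log-transformed first. A hedged version, assuming Y above holds pseudobulk counts and edgeR is available:
library(edgeR)
library(limma)
dge <- calcNormFactors(DGEList(counts = Y))
logCPM <- cpm(dge, log = TRUE, prior.count = 3)   # logCPM for limma-trend
fit <- lmFit(logCPM, mm, weights = weights)
fit <- eBayes(fit, trend = TRUE)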
I have no idea how this will perform relative to the other options you've tried.
Hello,
This is directly related to issue 119.
We evaluated muscat's implementation of limma-voom. It appears to ignore the observation-level weights from voom and to use only the per-sample weights (based on the number of cells for that sample in a given cluster). We tested the Type I error using different versions of limma-voom: the original one, the version with sample quality weights, and one based on what you tried to implement in muscat (where we multiplied the observation weights estimated by voom by the number of cells per subject per cluster).

We used a single-cell data set with 48 subjects and tested for (pseudo-bulked) differential expression between two groups of 24 subjects each. The analysis was based on 100 random permutations of the sample labels. The results for each method are summarised as box plots of the error rates (at a nominal Type I error level of 5%) across the 15 clusters in that data set.
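For reference, the permutation procedure is roughly along these lines per cluster (a simplified sketch, shown with the original limma-voom variant; Y is the pseudobulk count matrix for one cluster, group the observed group factor, and coef = 2 assumes the group effect is the second column of the design):
library(limma)
set.seed(1)
n_perm <- 100
type1 <- replicate(n_perm, {
  perm_group <- sample(group)                      # permute the sample labels
  mm_perm <- model.matrix(~ perm_group)
  v <- voom(Y, design = mm_perm)
  fit <- eBayes(lmFit(v, mm_perm))
  p <- topTable(fit, coef = 2, number = Inf, sort.by = "none")$P.Value
  mean(p < 0.05)                                   # observed error rate at the 5% level
})
summary(type1)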
The conclusion relevant to muscat is that your implementation has an elevated false-positive rate.
Thanks, -Reuben