stephenslab / susieR

R package for "sum of single effects" regression.
https://stephenslab.github.io/susieR

Feature request: summary lbf #216

Open jerome-f opened 5 months ago

jerome-f commented 5 months ago

@pcarbo I wanted to check if adding a summary lbf across single effects is feasible. Right now I take the column-wise maximum of the lbf_matrix variable.
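For reference, this is roughly what I do now (a minimal sketch on simulated data; my real analysis uses susie_rss() on summary statistics):

```r
# Sketch of the current workaround: collapse the L x p matrix of per-effect
# log Bayes factors to one value per SNP via the column-wise maximum.
library(susieR)
set.seed(1)
n <- 500; p <- 100
X <- matrix(rnorm(n * p), n, p)
beta <- rep(0, p); beta[c(10, 50)] <- 1
y <- drop(X %*% beta + rnorm(n))
res <- susie(X, y, L = 10)
lbf_per_snp <- apply(res$lbf_variable, 2, max)  # one summary logBF per SNP
```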

pcarbo commented 5 months ago

@jerome-f Sorry, I'm not clear on what you are asking for, and I'm not sure what lbf_matrix is. Do you mean lbf_variable? Could you provide a bit more detail? An example might help.

jerome-f commented 5 months ago

Hey Peter, sorry, that was a typo: I meant the lbf_variable matrix. What I am looking for is one lbf for each SNP; right now lbf_variable gives each SNP a vector of values across the L single effects. I am trying to meta-analyze the credible sets reported across models (FINEMAP and SuSiE-RSS) using BMA. As you'd be aware, SuSiE and FINEMAP don't always agree 1:1 on credible-set configurations or PIPs, but by averaging across models you can quantify the uncertainty around a specific SNP.

Best, Jerome

pcarbo commented 5 months ago

@jerome-f The logBFs (res$lbf_variable) are based on a simple association test, so I'm not sure that's what you want if your aim is to compare fine-mapping results across different analyses. I'm not sure what the right thing to do is, but if you want to compare CSs across analyses, the posterior inclusion probabilities (res$alpha) are probably closer to what you want, because they compare the evidence for an effect against the other candidate SNPs. So, for example, taking apply(res$alpha, 2, max) might be better.
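For example, something along these lines (a minimal sketch, assuming res is a susie() or susie_rss() fit as in your snippet above):

```r
# Per-SNP summaries based on posterior inclusion probabilities rather than
# the per-effect logBFs (`res` is an existing susie fit).
alpha_max <- apply(res$alpha, 2, max)  # strongest single-effect inclusion probability per SNP
pip       <- res$pip                   # overall per-SNP PIP combining all L effects
```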

You could also take a look at what Chris Wallace does in coloc, which uses the results of susie for colocalization.
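Roughly, coloc layers on top of susie like this (a sketch only; D1 and D2 stand in for coloc-style summary-statistic datasets, and the exact fields they need are described in the coloc vignettes):

```r
# Sketch: coloc wraps susie_rss via runsusie(), then colocalizes pairs of
# credible sets across the two fits. D1 and D2 are placeholder coloc datasets
# (lists with beta, varbeta, snp, LD, N, type, ...), not real inputs.
library(coloc)
S1 <- runsusie(D1)
S2 <- runsusie(D2)
res_coloc <- coloc.susie(S1, S2)
```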

jerome-f commented 5 months ago

@pcarbo Thanks, that makes sense. I will check out the coloc code base once again (that's where I looked first). But broadly speaking, given the same data, fine-mapping with different Bayesian methods will give somewhat different credible-set configurations and PIPs. When two methods agree you can be more confident in the inference, but when they disagree it would be prudent to reconcile them so that you can attach a confidence interval to the PIPs/credible sets. I haven't seen anyone really do this in the fine-mapping context.
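To make the idea concrete, here is a rough sketch of the kind of model averaging I have in mind (the PIP values and equal weights are placeholders for illustration, not an established procedure):

```r
# Toy sketch of model-averaged PIPs across two fine-mapping methods.
# The PIP vectors and the equal weights are placeholders only.
pip_susie   <- c(rs1 = 0.90, rs2 = 0.40, rs3 = 0.05)
pip_finemap <- c(rs1 = 0.70, rs2 = 0.60, rs3 = 0.10)
w <- c(susie = 0.5, finemap = 0.5)                     # placeholder model weights
pip_bma  <- w["susie"] * pip_susie + w["finemap"] * pip_finemap
pip_disp <- abs(pip_susie - pip_finemap)               # crude per-SNP disagreement measure
```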