Closed zhangchuyi closed 2 months ago
Hi @zhangchuyi,
I suggest you use the function load_ld_npz (in finemapper.py) to load the LD matrix into memory. Specifically, you need to add the matrix to its transpose: the file stores only one triangle, with the diagonal set to 0.5, so the sum gives a full symmetric matrix with ones on the diagonal. The LD measure is regular r (correlation), not r².
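The load-and-symmetrize step can be sketched as follows (a minimal sketch using scipy; the function name load_ld_npz and the triangular-storage convention are taken from this thread, and the demo file name is made up for illustration):

```python
import numpy as np
from scipy import sparse

def load_ld_dense(npz_file):
    """Load a triangular sparse LD (r) matrix and symmetrize it."""
    M = sparse.load_npz(npz_file)
    # The stored diagonal is 0.5, so M + M.T puts 1.0 on the diagonal.
    return (M + M.T).toarray()

# Demo on a synthetic 3-SNP matrix in the same triangular convention:
tri = sparse.csr_matrix(np.array([[0.5, 0.8, 0.1],
                                  [0.0, 0.5, 0.3],
                                  [0.0, 0.0, 0.5]]))
sparse.save_npz("demo_ld.npz", tri)
R = load_ld_dense("demo_ld.npz")
print(np.allclose(np.diag(R), 1.0))  # True
print(np.allclose(R, R.T))           # True
```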
If you just want to perform regular (non-functionally-informed) fine-mapping, you only need an LD matrix. If you want to use Bayesian priors based on functional annotations, you will also need LD scores, as explained in detail in the PolyFun paper.
Hope this helps, please let me know if not!
Closing this, please reopen if you need more help
I am currently working on a trans-ancestry (EUR+EAS) meta-analysis and would like to use PolyFun for fine-mapping. Our reviewers have suggested that we construct a sample-size-weighted LD matrix (EUR vs. EAS), and I wanted to consult you on the best approach.
1. LD Matrix Question
I have downloaded the precomputed UKB EUR LD matrix you provided, but I need to adjust it for trans-ancestry analysis by combining it with an EAS LD matrix that we computed ourselves. After converting the precomputed UKB sparse LD matrix to a dense format for inspection (e.g., chr7_10000001_13000001.npz), I noticed that its diagonal elements are 0.5, whereas the diagonal of an LD matrix I computed with PLINK (r²) is 1.
a. What measure of LD is used in the provided UKB LD matrix (r², or another measure)? And is any specific processing or normalization applied to the matrix?
b. Could you advise on the best way to combine or weight the EUR and EAS matrices under the metric you've used?
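For concreteness, the sample-size weighting the reviewers describe could be sketched like this (my own illustration, not guidance from this thread; both matrices are assumed to be r, not r², and aligned to identical SNP lists in the same order):

```python
import numpy as np

def weighted_ld(R_eur, R_eas, n_eur, n_eas):
    # Sample-size-weighted average of two correlation (r) matrices.
    # One simple heuristic; other schemes (e.g. effective-sample-size
    # weights) are possible.
    w = n_eur / (n_eur + n_eas)
    return w * R_eur + (1.0 - w) * R_eas

# Demo: weighting preserves the unit diagonal and symmetry.
R_eur = np.array([[1.0, 0.6], [0.6, 1.0]])
R_eas = np.array([[1.0, 0.2], [0.2, 1.0]])
R_meta = weighted_ld(R_eur, R_eas, n_eur=300000, n_eas=100000)
print(R_meta[0, 1])  # 0.5 = 0.75*0.6 + 0.25*0.2
```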
2. LD Score Question
If we proceed with the weighted LD matrix, should we also generate a corresponding LD-score file? Would we need to compute new LD scores for the annotation and prior-probability calculations, or could we continue using the existing LD-score files?
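If new LD scores did turn out to be needed, the unadjusted per-SNP LD score is just the row sum of squared correlations. A minimal sketch (ignoring the small-sample bias correction and the windowing that real pipelines such as LDSC apply):

```python
import numpy as np

def ld_scores(R):
    # Unadjusted LD score of SNP j: sum over k of r_{jk}^2,
    # including j itself (which contributes 1.0).
    return (np.asarray(R) ** 2).sum(axis=1)

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
print(ld_scores(R))  # [1.25 1.25]
```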
Any guidance you can provide would be greatly appreciated. Thank you in advance for your time, and I look forward to your response.