Closed AMCalejandro closed 1 year ago
compute_n = "ldsc" (I get sample_size=NULL: must be valid integer)
This was a bug due to me missing some places where the sample_size was still being used. I just pushed a change to construct_colmap so that it takes the arg N= instead of sample_size. N acts just like the other mapping columns in that it renames a column based on the value supplied to the argument (construct_colmap(N="N_cases") means that the "N_cases" column will get renamed to "N" and used in any subsequent steps that require total (or effective) sample size).
Note that whenever the "N" col is present in the post-standardized sumstats data, it will be used instead of whatever you supply to compute_n. I've updated the docs to better explain this.
compute_n = data_munged$NSTUDY (This does not work at all)
This is my bad; I told you yesterday that MungeSumstats::compute_nsize(compute_n=) can take a vector of sample sizes. But apparently I misremembered: it can only take a single number, which is applied to all rows. Instead, I'll add a line to echodata::get_sample_size that handles these scenarios:
### Numeric vector
if (is.numeric(compute_n) &&
    length(compute_n) > 1) {
  messager("Numeric vector supplied to compute_n.", v = verbose)
  if (length(compute_n) != nrow(dat2)) {
    stp <- paste(
      "When compute_n is a numeric vector,",
      "its length must be exactly equal to the number of rows",
      "in your summary statistics data (dat)."
    )
    stop(stp)
  } else {
    dat2$N <- compute_n
    return(dat2)
  }
}
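Self-contained, the length check above can be exercised with a toy table (plain base R; `assign_n` is a hypothetical stand-in for the real echodata internals):

```r
# Toy stand-in for the compute_n vector handling shown above.
# `dat2` mimics the standardized sumstats table.
dat2 <- data.frame(SNP = c("rs1", "rs2", "rs3"))

assign_n <- function(dat2, compute_n) {
  if (is.numeric(compute_n) && length(compute_n) > 1) {
    if (length(compute_n) != nrow(dat2)) {
      stop("When compute_n is a numeric vector, ",
           "its length must equal nrow(dat).")
    }
    dat2$N <- compute_n
  }
  dat2
}

ok <- assign_n(dat2, c(1000, 1500, 2000))   # per-row N assigned
err <- tryCatch(assign_n(dat2, c(1000, 1500)),
                error = function(e) "length mismatch")
```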
Rfast also wasn't installed. I've now ensured that it gets automatically installed by making it an Import of echofinemap.
1.b. compute_n = data_munged$NSTUDY (This does not work at all)
Just so you know, I read in Alan's documentation that the argument could take a vector.
I am assuming compute_n is used within MungeSumStats and not within echolocatoR workflow?
The docs there say that you can supply a single integer (not a vector), which is applied to all rows. The only place a vector is mentioned is for the character arguments, in which case it just takes the first element of the character vector, which indicates the strategy you'd like to use for computing (effective) sample size from other columns.
I am assuming compute_n is used within MungeSumStats and not within echolocatoR workflow?
It's used in both. The difference is that echodata::get_sample_size wraps MungeSumstats::compute_nsize and has some additional input-handling strategies, including (now) handling numeric vectors supplied to compute_n.
https://github.com/RajLabMSSM/echodata/blob/main/R/get_sample_size.R
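A minimal usage sketch of the three input types (argument names assumed from the linked source and may differ slightly across versions; `sumstats` is a placeholder for your munged table):

```r
library(echodata)

## 1) Single integer: applied to every row.
dat <- echodata::get_sample_size(dat = sumstats, compute_n = 3500)

## 2) Numeric vector (newly supported): one value per row;
##    must be exactly nrow(sumstats) long.
dat <- echodata::get_sample_size(dat = sumstats, compute_n = sumstats$NSTUDY)

## 3) Character strategy, passed through to MungeSumstats::compute_nsize
##    (e.g. "ldsc", "sum"); derives N from other columns.
dat <- echodata::get_sample_size(dat = sumstats, compute_n = "ldsc")
```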
I just got this running
Note that:
columnsnames <- echodata::construct_colmap(munged = FALSE,
                                           CHR = "CHR", POS = "POS",
                                           SNP = "SNP", P = "P",
                                           Effect = "BETA", StdErr = "SE",
                                           A1 = "A1", A2 = "A2", Freq = "FREQ",
                                           N = "N")
                                           # N_cases = NULL, N_controls = NULL,
                                           # proportion_cases = NULL,
                                           # MAF = "calculate",
                                           # tstat = NULL)
# Pass the sample size as the "N" column.
# compute_n will do everything described in the docs if N does not exist.
finemap_loci(# GENERAL ARGUMENTS
topSNPs = topSNPs,
results_dir = fullRS_path,
loci = topSNPs$Locus,
dataset_name = "LID_COX",
dataset_type = "GWAS",
force_new_subset = TRUE,
force_new_LD = FALSE,
force_new_finemap = TRUE,
remove_tmps = FALSE,
finemap_methods = c("ABF","FINEMAP","SUSIE", "POLYFUN_SUSIE"),
# Munge full sumstats first
munged = FALSE,
colmap = columnsnames,
# SUMMARY STATS ARGUMENTS
fullSS_path = newSS_name_colmap,
fullSS_genome_build = "hg19",
query_by ="tabix",
#compute_n = 3500,
bp_distance = 10000,#500000*2,
min_MAF = 0.001,
trim_gene_limits = FALSE,
case_control = FALSE,
# FINE-MAPPING ARGUMENTS
## General
n_causal = 5,
credset_thresh = .95,
consensus_thresh = 2,
# LD ARGUMENTS
LD_reference = "1KGphase3",#"UKB",
superpopulation = "EUR",
download_method = "axel",
LD_genome_build = "hg19",
leadSNP_LD_block = FALSE,
#### Plotting args ####
plot_types = c("simple"),
show_plot = TRUE,
zoom = "1x",
tx_biotypes = NULL,
nott_epigenome = FALSE,
nott_show_placseq = FALSE,
nott_binwidth = 200,
nott_bigwig_dir = NULL,
xgr_libnames = NULL,
roadmap = FALSE,
roadmap_query = NULL,
#### General args ####
seed = 2022,
nThread = 20,
verbose = TRUE
)
PolyFun submodule already installed.
┌─────────────────────────────────────────────────┐
│ │
│ )))> 🦇 RP11-240A16.1 [locus 1 / 3] 🦇 <((( │
│ │
└─────────────────────────────────────────────────┘
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 1 ▶▶▶ Query 🔎 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
+ Query Method: tabix
Constructing GRanges query using min/max ranges within a single chromosome.
query_dat is already a GRanges object. Returning directly.
========= echotabix::convert =========
Converting full summary stats file to tabix format for fast querying.
Inferred format: 'table'
Explicit format: 'table'
Inferring comment_char from tabular header: 'SNP'
Determining chrom type from file header.
Chromosome format: 1
Detecting column delimiter.
Identified column separator: \t
Sorting rows by coordinates via bash.
Searching for header row with grep.
( grep ^'SNP' .../QC_SNPs_COLMAP.txt; grep
-v ^'SNP' .../QC_SNPs_COLMAP.txt | sort
-k2,2n
-k3,3n ) > .../file2fb2fcecd3b_sorted.tsv
Constructing outputs
Using existing bgzipped file: /home/rstudio/echolocatoR/echolocatoR_LID/QC_SNPs_COLMAP.txt.bgz
Set force_new=TRUE to override this.
Tabix-indexing file using: Rsamtools
Data successfully converted to bgzip-compressed, tabix-indexed format.
========= echotabix::query =========
query_dat is already a GRanges object. Returning directly.
Inferred format: 'table'
Querying tabular tabix file using: Rsamtools.
Checking query chromosome style is correct.
Chromosome format: 1
Retrieving data.
Converting query results to data.table.
Processing query: 4:32425284-32445284
Adding 'query' column to results.
Retrieved data with 76 rows
Saving query ==> /home/rstudio/echolocatoR/echolocatoR_LID/RESULTS/GWAS/LID_COX/RP11-240A16.1/RP11-240A16.1_LID_COX_subset.tsv.gz
+ Query: 76 SNPs x 10 columns.
Standardizing summary statistics subset.
Standardizing main column names.
++ Preparing A1,A1 cols
++ Preparing MAF,Freq cols.
++ Could not infer MAF.
++ Preparing N_cases,N_controls cols.
++ Preparing proportion_cases col.
++ proportion_cases not included in data subset.
Preparing sample size column (N).
Using existing 'N' column.
+ Imputing t-statistic from Effect and StdErr.
+ leadSNP missing. Assigning new one by min p-value.
++ Ensuring Effect,StdErr,P are numeric.
++ Ensuring 1 SNP per row and per genomic coordinate.
++ Removing extra whitespace
+ Standardized query: 76 SNPs x 12 columns.
++ Saving standardized query ==> /home/rstudio/echolocatoR/echolocatoR_LID/RESULTS/GWAS/LID_COX/RP11-240A16.1/RP11-240A16.1_LID_COX_subset.tsv.gz
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 2 ▶▶▶ Extract Linkage Disequilibrium 🔗 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
LD_reference identified as: 1kg.
Previously computed LD_matrix detected. Importing: /home/rstudio/echolocatoR/echolocatoR_LID/RESULTS/GWAS/LID_COX/RP11-240A16.1/LD/RP11-240A16.1.1KGphase3_LD.RDS
LD_reference identified as: r.
Converting obj to sparseMatrix.
+ FILTER:: Filtering by LD features.
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 3 ▶▶▶ Filter SNPs 🚰 ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
FILTER:: Filtering by SNP features.
+ FILTER:: Post-filtered data: 76 x 12
+ Subsetting LD matrix and dat to common SNPs...
Removing unnamed rows/cols
Replacing NAs with 0
+ LD_matrix = 76 SNPs.
+ dat = 76 SNPs.
+ 76 SNPs in common.
Converting obj to sparseMatrix.
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 4 ▶▶▶ Fine-map 🔊 ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Gathering method sources.
Gathering method citations.
Preparing sample size column (N).
Using existing 'N' column.
Gathering method sources.
Gathering method citations.
Gathering method sources.
Gathering method citations.
ABF
🚫 Missing required column(s) for ABF [skipping]: MAF, proportion_cases
FINEMAP
✅ All required columns present.
⚠ Missing optional column(s) for FINEMAP: MAF
SUSIE
✅ All required columns present.
✅ All optional columns present.
POLYFUN_SUSIE
✅ All required columns present.
⚠ Missing optional column(s) for POLYFUN_SUSIE: MAF
++ Fine-mapping using 3 tool(s): FINEMAP, SUSIE, POLYFUN_SUSIE
+++ Multi-finemap:: FINEMAP +++
Preparing sample size column (N).
Using existing 'N' column.
+ Subsetting LD matrix and dat to common SNPs...
Removing unnamed rows/cols
Replacing NAs with 0
+ LD_matrix = 76 SNPs.
+ dat = 76 SNPs.
+ 76 SNPs in common.
Converting obj to sparseMatrix.
Constructing master file.
Optional MAF col missing. Replacing with all '.1's
Constructing data.z file.
Constructing data.ld file.
FINEMAP path: /home/rstudio/.cache/R/echofinemap/FINEMAP/finemap_v1.4.1_x86_64/finemap_v1.4.1_x86_64
Inferred FINEMAP version: 1.4.1
Running FINEMAP.
cd .../RP11-240A16.1 &&
.../finemap_v1.4.1_x86_64
--sss
--in-files .../master
--log
--n-threads 20
--n-causal-snps 5
|--------------------------------------|
| Welcome to FINEMAP v1.4.1 |
| |
| (c) 2015-2022 University of Helsinki |
| |
| Help : |
| - ./finemap --help |
| - www.finemap.me |
| - www.christianbenner.com |
| |
| Contact : |
| - finemap@christianbenner.com |
| - matti.pirinen@helsinki.fi |
|--------------------------------------|
--------
SETTINGS
--------
- dataset : all
- corr-config : 0.95
- n-causal-snps : 5
- n-configs-top : 50000
- n-conv-sss : 100
- n-iter : 100000
- n-threads : 20
- prior-k0 : 0
- prior-std : 0.05
- prob-conv-sss-tol : 0.001
- prob-cred-set : 0.95
------------
FINE-MAPPING (1/1)
------------
- GWAS summary stats : FINEMAP/data.z
- SNP correlations : FINEMAP/data.ld
- Causal SNP stats : FINEMAP/data.snp
- Causal configurations : FINEMAP/data.config
- Credible sets : FINEMAP/data.cred
- Log file : FINEMAP/data.log_sss
- Reading input : done!
- Updated prior SD of effect sizes : 0.05 0.0528 0.0558 0.0589
- Number of GWAS samples : 2687
- Number of SNPs : 76
- Prior-Pr(# of causal SNPs is k) :
(0 -> 0)
1 -> 0.584
2 -> 0.292
3 -> 0.096
4 -> 0.0234
5 -> 0.00449
- 1800 configurations evaluated (0.122/100%) : converged after 122 iterations
- Computing causal SNP statistics : done!
- Regional SNP heritability : 0.0276 (SD: 0.00441 ; 95% CI: [0.0196,0.0371])
- Log10-BF of >= one causal SNP : 24.4
- Post-expected # of causal SNPs : 4.74
- Post-Pr(# of causal SNPs is k) :
(0 -> 0)
1 -> 9.4e-21
2 -> 2.73e-11
3 -> 1.41e-07
4 -> 0.265
5 -> 0.735
- Writing output : done!
- Run time : 0 hours, 0 minutes, 0 seconds
2 data.cred* file(s) found in the same subfolder.
Selected file based on postPr_k: data.cred5
Importing conditional probabilities (.cred file).
No configurations were causal at PP>=0.95.
Importing marginal probabilities (.snp file).
Importing configuration probabilities (.config file).
FINEMAP was unable to identify any credible sets at PP>=0.95.
++ Credible Set SNPs identified = 0
++ Merging FINEMAP results with multi-finemap data.
+++ Multi-finemap:: SUSIE +++
Loading required namespace: Rfast
Failed with error: 'there is no package called 'Rfast''
Preparing sample size column (N).
Using existing 'N' column.
+ SUSIE:: sample_size=2,687
+ Subsetting LD matrix and dat to common SNPs...
Removing unnamed rows/cols
Replacing NAs with 0
+ LD_matrix = 76 SNPs.
+ dat = 76 SNPs.
+ 76 SNPs in common.
Converting obj to sparseMatrix.
+ SUSIE:: Using `susie_rss()` from susieR v0.12.27
+ SUSIE:: Extracting Credible Sets.
++ Credible Set SNPs identified = 2
++ Merging SUSIE results with multi-finemap data.
+++ Multi-finemap:: POLYFUN_SUSIE +++
PolyFun submodule already installed.
PolyFun:: Fine-mapping with method=SUSIE
PolyFun:: Using priors from mode=precomputed
Unable to find conda binary. Is Anaconda installed?
Locus RP11-240A16.1 complete in: 0.33 min
┌─────────────────────────────────────────┐
│ │
│ )))> 🦇 XYLT1 [locus 2 / 3] 🦇 <((( │
│ │
└─────────────────────────────────────────┘
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 1 ▶▶▶ Query 🔎 ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
+ Query Method: tabix
Constructing GRanges query using min/max ranges within a single chromosome.
query_dat is already a GRanges object. Returning directly.
========= echotabix::convert =========
Converting full summary stats file to tabix format for fast querying.
Inferred format: 'table'
Explicit format: 'table'
Inferring comment_char from tabular header: 'SNP'
Determining chrom type from file header.
Chromosome format: 1
Detecting column delimiter.
Identified column separator: \t
Sorting rows by coordinates via bash.
Searching for header row with grep.
( grep ^'SNP' .../QC_SNPs_COLMAP.txt; grep
-v ^'SNP' .../QC_SNPs_COLMAP.txt | sort
-k2,2n
-k3,3n ) > .../file2fb33669f7f_sorted.tsv
Constructing outputs
Using existing bgzipped file: /home/rstudio/echolocatoR/echolocatoR_LID/QC_SNPs_COLMAP.txt.bgz
Set force_new=TRUE to override this.
Tabix-indexing file using: Rsamtools
Data successfully converted to bgzip-compressed, tabix-indexed format.
========= echotabix::query =========
query_dat is already a GRanges object. Returning directly.
Inferred format: 'table'
Querying tabular tabix file using: Rsamtools.
Checking query chromosome style is correct.
Chromosome format: 1
Retrieving data.
Converting query results to data.table.
Processing query: 16:17034975-17054975
Adding 'query' column to results.
Retrieved data with 80 rows
Saving query ==> /home/rstudio/echolocatoR/echolocatoR_LID/RESULTS/GWAS/LID_COX/XYLT1/XYLT1_LID_COX_subset.tsv.gz
+ Query: 80 SNPs x 10 columns.
Standardizing summary statistics subset.
Standardizing main column names.
++ Preparing A1,A1 cols
++ Preparing MAF,Freq cols.
++ Could not infer MAF.
++ Preparing N_cases,N_controls cols.
++ Preparing proportion_cases col.
++ proportion_cases not included in data subset.
Preparing sample size column (N).
Using existing 'N' column.
+ Imputing t-statistic from Effect and StdErr.
+ leadSNP missing. Assigning new one by min p-value.
++ Ensuring Effect,StdErr,P are numeric.
++ Ensuring 1 SNP per row and per genomic coordinate.
++ Removing extra whitespace
+ Standardized query: 80 SNPs x 12 columns.
++ Saving standardized query ==> /home/rstudio/echolocatoR/echolocatoR_LID/RESULTS/GWAS/LID_COX/XYLT1/XYLT1_LID_COX_subset.tsv.gz
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 2 ▶▶▶ Extract Linkage Disequilibrium 🔗 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
LD_reference identified as: 1kg.
Previously computed LD_matrix detected. Importing: /home/rstudio/echolocatoR/echolocatoR_LID/RESULTS/GWAS/LID_COX/XYLT1/LD/XYLT1.1KGphase3_LD.RDS
LD_reference identified as: r.
Converting obj to sparseMatrix.
+ FILTER:: Filtering by LD features.
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 3 ▶▶▶ Filter SNPs 🚰 ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
FILTER:: Filtering by SNP features.
+ FILTER:: Post-filtered data: 78 x 12
+ Subsetting LD matrix and dat to common SNPs...
Removing unnamed rows/cols
Replacing NAs with 0
+ LD_matrix = 78 SNPs.
+ dat = 78 SNPs.
+ 78 SNPs in common.
Converting obj to sparseMatrix.
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 4 ▶▶▶ Fine-map 🔊 ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Gathering method sources.
Gathering method citations.
Preparing sample size column (N).
Using existing 'N' column.
Gathering method sources.
Gathering method citations.
Gathering method sources.
Gathering method citations.
ABF
🚫 Missing required column(s) for ABF [skipping]: MAF, proportion_cases
FINEMAP
✅ All required columns present.
⚠ Missing optional column(s) for FINEMAP: MAF
SUSIE
✅ All required columns present.
✅ All optional columns present.
POLYFUN_SUSIE
✅ All required columns present.
⚠ Missing optional column(s) for POLYFUN_SUSIE: MAF
++ Fine-mapping using 3 tool(s): FINEMAP, SUSIE, POLYFUN_SUSIE
+++ Multi-finemap:: FINEMAP +++
Preparing sample size column (N).
Using existing 'N' column.
+ Subsetting LD matrix and dat to common SNPs...
Removing unnamed rows/cols
Replacing NAs with 0
+ LD_matrix = 78 SNPs.
+ dat = 78 SNPs.
+ 78 SNPs in common.
Converting obj to sparseMatrix.
Constructing master file.
Optional MAF col missing. Replacing with all '.1's
Constructing data.z file.
Constructing data.ld file.
FINEMAP path: /home/rstudio/.cache/R/echofinemap/FINEMAP/finemap_v1.4.1_x86_64/finemap_v1.4.1_x86_64
Inferred FINEMAP version: 1.4.1
Running FINEMAP.
cd .../XYLT1 &&
.../finemap_v1.4.1_x86_64
--sss
--in-files .../master
--log
--n-threads 20
--n-causal-snps 5
|--------------------------------------|
| Welcome to FINEMAP v1.4.1 |
| |
| (c) 2015-2022 University of Helsinki |
| |
| Help : |
| - ./finemap --help |
| - www.finemap.me |
| - www.christianbenner.com |
| |
| Contact : |
| - finemap@christianbenner.com |
| - matti.pirinen@helsinki.fi |
|--------------------------------------|
--------
SETTINGS
--------
- dataset : all
- corr-config : 0.95
- n-causal-snps : 5
- n-configs-top : 50000
- n-conv-sss : 100
- n-iter : 100000
- n-threads : 20
- prior-k0 : 0
- prior-std : 0.05
- prob-conv-sss-tol : 0.001
- prob-cred-set : 0.95
------------
FINE-MAPPING (1/1)
------------
- GWAS summary stats : FINEMAP/data.z
- SNP correlations : FINEMAP/data.ld
- Causal SNP stats : FINEMAP/data.snp
- Causal configurations : FINEMAP/data.config
- Credible sets : FINEMAP/data.cred
- Log file : FINEMAP/data.log_sss
- Reading input : done!
- Updated prior SD of effect sizes : 0.05 0.0522 0.0545 0.0568
- Number of GWAS samples : 2687
- Number of SNPs : 78
- Prior-Pr(# of causal SNPs is k) :
(0 -> 0)
1 -> 0.584
2 -> 0.292
3 -> 0.0961
4 -> 0.0234
5 -> 0.0045
- 1077 configurations evaluated (0.198/100%) : converged after 198 iterations
- Computing causal SNP statistics : done!
- Regional SNP heritability : 0.0119 (SD: 0.00385 ; 95% CI: [0.00536,0.0204])
- Log10-BF of >= one causal SNP : 4.46
- Post-expected # of causal SNPs : 1.96
- Post-Pr(# of causal SNPs is k) :
(0 -> 0)
1 -> 0.245
2 -> 0.548
3 -> 0.204
4 -> 0.00238
5 -> 0
- Writing output : done!
- Run time : 0 hours, 0 minutes, 0 seconds
3 data.cred* file(s) found in the same subfolder.
Selected file based on postPr_k: data.cred2
Importing conditional probabilities (.cred file).
No configurations were causal at PP>=0.95.
Importing marginal probabilities (.snp file).
Importing configuration probabilities (.config file).
FINEMAP was unable to identify any credible sets at PP>=0.95.
++ Credible Set SNPs identified = 0
++ Merging FINEMAP results with multi-finemap data.
+++ Multi-finemap:: SUSIE +++
Loading required namespace: Rfast
Failed with error: 'there is no package called 'Rfast''
In addition: Warning messages:
1: In SUSIE(dat = dat, dataset_type = dataset_type, LD_matrix = LD_matrix, :
Install Rfast to speed up susieR even further:
install.packages('Rfast')
2: In susie_suff_stat(XtX = XtX, Xty = Xty, n = n, yty = (n - 1) * :
IBSS algorithm did not converge in 100 iterations!
Please check consistency between summary statistics and LD matrix.
See https://stephenslab.github.io/susieR/articles/susierss_diagnostic.html
Preparing sample size column (N).
Using existing 'N' column.
+ SUSIE:: sample_size=2,687
+ Subsetting LD matrix and dat to common SNPs...
Removing unnamed rows/cols
Replacing NAs with 0
+ LD_matrix = 78 SNPs.
+ dat = 78 SNPs.
+ 78 SNPs in common.
Converting obj to sparseMatrix.
+ SUSIE:: Using `susie_rss()` from susieR v0.12.27
+ SUSIE:: Extracting Credible Sets.
++ Credible Set SNPs identified = 1
++ Merging SUSIE results with multi-finemap data.
+++ Multi-finemap:: POLYFUN_SUSIE +++
PolyFun submodule already installed.
PolyFun:: Fine-mapping with method=SUSIE
PolyFun:: Using priors from mode=precomputed
Unable to find conda binary. Is Anaconda installed?
Locus XYLT1 complete in: 0.32 min
┌────────────────────────────────────────┐
│ │
│ )))> 🦇 LRP8 [locus 3 / 3] 🦇 <((( │
│ │
└────────────────────────────────────────┘
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 1 ▶▶▶ Query 🔎 ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
+ Query Method: tabix
Constructing GRanges query using min/max ranges within a single chromosome.
query_dat is already a GRanges object. Returning directly.
========= echotabix::convert =========
Converting full summary stats file to tabix format for fast querying.
Inferred format: 'table'
Explicit format: 'table'
Inferring comment_char from tabular header: 'SNP'
Determining chrom type from file header.
Chromosome format: 1
Detecting column delimiter.
Identified column separator: \t
Sorting rows by coordinates via bash.
Searching for header row with grep.
( grep ^'SNP' .../QC_SNPs_COLMAP.txt; grep
-v ^'SNP' .../QC_SNPs_COLMAP.txt | sort
-k2,2n
-k3,3n ) > .../file2fb4113b218_sorted.tsv
Constructing outputs
Using existing bgzipped file: /home/rstudio/echolocatoR/echolocatoR_LID/QC_SNPs_COLMAP.txt.bgz
Set force_new=TRUE to override this.
Tabix-indexing file using: Rsamtools
Data successfully converted to bgzip-compressed, tabix-indexed format.
========= echotabix::query =========
query_dat is already a GRanges object. Returning directly.
Inferred format: 'table'
Querying tabular tabix file using: Rsamtools.
Checking query chromosome style is correct.
Chromosome format: 1
Retrieving data.
Converting query results to data.table.
Processing query: 1:53768300-53788300
Adding 'query' column to results.
Retrieved data with 52 rows
Saving query ==> /home/rstudio/echolocatoR/echolocatoR_LID/RESULTS/GWAS/LID_COX/LRP8/LRP8_LID_COX_subset.tsv.gz
+ Query: 52 SNPs x 10 columns.
Standardizing summary statistics subset.
Standardizing main column names.
++ Preparing A1,A1 cols
++ Preparing MAF,Freq cols.
++ Could not infer MAF.
++ Preparing N_cases,N_controls cols.
++ Preparing proportion_cases col.
++ proportion_cases not included in data subset.
Preparing sample size column (N).
Using existing 'N' column.
+ Imputing t-statistic from Effect and StdErr.
+ leadSNP missing. Assigning new one by min p-value.
++ Ensuring Effect,StdErr,P are numeric.
++ Ensuring 1 SNP per row and per genomic coordinate.
++ Removing extra whitespace
+ Standardized query: 52 SNPs x 12 columns.
++ Saving standardized query ==> /home/rstudio/echolocatoR/echolocatoR_LID/RESULTS/GWAS/LID_COX/LRP8/LRP8_LID_COX_subset.tsv.gz
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 2 ▶▶▶ Extract Linkage Disequilibrium 🔗 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
LD_reference identified as: 1kg.
Previously computed LD_matrix detected. Importing: /home/rstudio/echolocatoR/echolocatoR_LID/RESULTS/GWAS/LID_COX/LRP8/LD/LRP8.1KGphase3_LD.RDS
LD_reference identified as: r.
Converting obj to sparseMatrix.
+ FILTER:: Filtering by LD features.
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 3 ▶▶▶ Filter SNPs 🚰 ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
FILTER:: Filtering by SNP features.
+ FILTER:: Post-filtered data: 51 x 12
+ Subsetting LD matrix and dat to common SNPs...
Removing unnamed rows/cols
Replacing NAs with 0
+ LD_matrix = 51 SNPs.
+ dat = 51 SNPs.
+ 51 SNPs in common.
Converting obj to sparseMatrix.
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 4 ▶▶▶ Fine-map 🔊 ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Gathering method sources.
Gathering method citations.
Preparing sample size column (N).
Using existing 'N' column.
Gathering method sources.
Gathering method citations.
Gathering method sources.
Gathering method citations.
ABF
🚫 Missing required column(s) for ABF [skipping]: MAF, proportion_cases
FINEMAP
✅ All required columns present.
⚠ Missing optional column(s) for FINEMAP: MAF
SUSIE
✅ All required columns present.
✅ All optional columns present.
POLYFUN_SUSIE
✅ All required columns present.
⚠ Missing optional column(s) for POLYFUN_SUSIE: MAF
++ Fine-mapping using 3 tool(s): FINEMAP, SUSIE, POLYFUN_SUSIE
+++ Multi-finemap:: FINEMAP +++
Preparing sample size column (N).
Using existing 'N' column.
+ Subsetting LD matrix and dat to common SNPs...
Removing unnamed rows/cols
Replacing NAs with 0
+ LD_matrix = 51 SNPs.
+ dat = 51 SNPs.
+ 51 SNPs in common.
Converting obj to sparseMatrix.
Constructing master file.
Optional MAF col missing. Replacing with all '.1's
Constructing data.z file.
Constructing data.ld file.
FINEMAP path: /home/rstudio/.cache/R/echofinemap/FINEMAP/finemap_v1.4.1_x86_64/finemap_v1.4.1_x86_64
Inferred FINEMAP version: 1.4.1
Running FINEMAP.
cd .../LRP8 &&
.../finemap_v1.4.1_x86_64
--sss
--in-files .../master
--log
--n-threads 20
--n-causal-snps 5
|--------------------------------------|
| Welcome to FINEMAP v1.4.1 |
| |
| (c) 2015-2022 University of Helsinki |
| |
| Help : |
| - ./finemap --help |
| - www.finemap.me |
| - www.christianbenner.com |
| |
| Contact : |
| - finemap@christianbenner.com |
| - matti.pirinen@helsinki.fi |
|--------------------------------------|
--------
SETTINGS
--------
- dataset : all
- corr-config : 0.95
- n-causal-snps : 5
- n-configs-top : 50000
- n-conv-sss : 100
- n-iter : 100000
- n-threads : 20
- prior-k0 : 0
- prior-std : 0.05
- prob-conv-sss-tol : 0.001
- prob-cred-set : 0.95
------------
FINE-MAPPING (1/1)
------------
- GWAS summary stats : FINEMAP/data.z
- SNP correlations : FINEMAP/data.ld
- Causal SNP stats : FINEMAP/data.snp
- Causal configurations : FINEMAP/data.config
- Credible sets : FINEMAP/data.cred
- Log file : FINEMAP/data.log_sss
- Reading input : done!
- Updated prior SD of effect sizes : 0.05 0.0517 0.0535 0.0554
- Number of GWAS samples : 2687
- Number of SNPs : 51
- Prior-Pr(# of causal SNPs is k) :
(0 -> 0)
1 -> 0.585
2 -> 0.292
3 -> 0.0955
4 -> 0.0229
5 -> 0.00431
- 1081 configurations evaluated (0.123/100%) : converged after 123 iterations
- Computing causal SNP statistics : done!
- Regional SNP heritability : 0.0259 (SD: 0.00368 ; 95% CI: [0.0188,0.0334])
- Log10-BF of >= one causal SNP : 24.9
- Post-expected # of causal SNPs : 5
- Post-Pr(# of causal SNPs is k) :
(0 -> 0)
1 -> 5.84e-22
2 -> 1.71e-17
3 -> 1.74e-11
4 -> 4.56e-06
5 -> 1
- Writing output : done!
- Run time : 0 hours, 0 minutes, 0 seconds
1 data.cred* file(s) found in the same subfolder.
Selected file based on postPr_k: data.cred5
Importing conditional probabilities (.cred file).
No configurations were causal at PP>=0.95.
Importing marginal probabilities (.snp file).
Importing configuration probabilities (.config file).
FINEMAP was unable to identify any credible sets at PP>=0.95.
++ Credible Set SNPs identified = 0
++ Merging FINEMAP results with multi-finemap data.
+++ Multi-finemap:: SUSIE +++
Loading required namespace: Rfast
Failed with error: 'there is no package called 'Rfast''
In addition: Warning message:
In SUSIE(dat = dat, dataset_type = dataset_type, LD_matrix = LD_matrix, :
Install Rfast to speed up susieR even further:
install.packages('Rfast')
Preparing sample size column (N).
Using existing 'N' column.
+ SUSIE:: sample_size=2,687
+ Subsetting LD matrix and dat to common SNPs...
Removing unnamed rows/cols
Replacing NAs with 0
+ LD_matrix = 51 SNPs.
+ dat = 51 SNPs.
+ 51 SNPs in common.
Converting obj to sparseMatrix.
+ SUSIE:: Using `susie_rss()` from susieR v0.12.27
+ SUSIE:: Extracting Credible Sets.
++ Credible Set SNPs identified = 3
++ Merging SUSIE results with multi-finemap data.
+++ Multi-finemap:: POLYFUN_SUSIE +++
PolyFun submodule already installed.
PolyFun:: Fine-mapping with method=SUSIE
PolyFun:: Using priors from mode=precomputed
Unable to find conda binary. Is Anaconda installed?
Locus LRP8 complete in: 0.33 min
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
── Step 6 ▶▶▶ Postprocess data 🎁 ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Returning results as nested list.
All loci done in: 0.97 min
$`RP11-240A16.1`
NULL
$XYLT1
NULL
$LRP8
NULL
$merged_dat
Null data.table (0 rows and 0 cols)
Warning message:
In SUSIE(dat = dat, dataset_type = dataset_type, LD_matrix = LD_matrix, :
Install Rfast to speed up susieR even further:
install.packages('Rfast')
Ok, actually I see how this kind of suggests that you can supply an integer vector as a last resort. The issue is that message() tries to print every single item in the vector. This isn't a problem with a small number of SNPs, like in our MSS unit tests, but it does become one when printing millions.
Posted this here and will get it fixed: https://github.com/neurogenomics/MungeSumstats/issues/125
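Until that's fixed upstream, a common pattern on the messaging side is to truncate long vectors before printing; a minimal sketch (`message_trunc` is a hypothetical helper, not MungeSumstats code):

```r
# Print at most `max_items` elements of a vector via message(),
# summarizing the remainder instead of echoing millions of values.
message_trunc <- function(x, max_items = 5) {
  shown <- head(x, max_items)
  extra <- length(x) - length(shown)
  txt <- paste(shown, collapse = ", ")
  if (extra > 0) txt <- paste0(txt, ", ... (", extra, " more)")
  message(txt)
  invisible(txt)
}

out <- message_trunc(seq_len(1e6))  # prints "1, 2, 3, 4, 5, ... (999995 more)"
```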
I just got this running. Note that: ABF does not run if proportion_cases is missing, and POLYFUN_SUSIE is missing the conda environment.
Cool, that's progress.
Can you create separate Issues for these, to avoid mixing problems? (It makes it easier to find solutions later.)
Ok, did you merge with master?
From: Brian M. Schilder — Sent: 20 September 2022 — Subject: Re: [RajLabMSSM/echolocatoR] Problem with "N" using munged data (Issue #114)
1. Bug description
This is the continuation of a bug description in #113. I tried to pass the munged data to finemap_loci, and the issue with sample size keeps arising.
When I pass an integer to compute_n, it sort of works, assigning the same N to all SNPs. However, as you will see below, NSTUDY exists.
2. Reproducible example
Note: reading the compute_n entry in the MungeSumstats docs, I tried the following.
Code
Console output
Data
Input data stored in a tsv.gz file
3. Session info
(Add the output of the R function utils::sessionInfo() below. This helps us assess version/OS conflicts which could be causing bugs.)