Nhasbani opened this issue 3 years ago (status: Open)
I have the same problem. Has it been solved?
I am having the same issue. Any solution?
Try using the --chunksize flag. If your data is large, that can cause this error. For example: --chunksize 500000. I hope this helps.
--chunksize 500000 does not work
Selecting the columns I need and deleting extra columns works for me
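Roughly, that might look like the pandas sketch below. The column names are the ones munge_sumstats.py reported in the log further down this thread; the file names and the tab separator are just placeholders for your own sumstats file, so adjust them as needed.

```python
import pandas as pd

# Hypothetical paths; the input is assumed to be tab-separated.
sumstats = pd.read_csv("IMT.meta_CHARGE_UKB1.TBL", sep="\t")

# Keep only the columns munge_sumstats.py actually uses and drop everything else.
keep = ["MarkerName", "Allele1", "Allele2", "Effect", "P-value", "N"]
sumstats = sumstats[keep]

sumstats.to_csv("IMT.meta_trimmed.tsv", sep="\t", index=False)
```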
Sadly, this doesn't work for me either; I hope someone can help me out.
In my experience, this problem was solved after removing an all-"NA" column from the input file. Hope it helps.
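A minimal sketch of that cleanup step, assuming a tab-separated file (the file names here are placeholders):

```python
import pandas as pd

sumstats = pd.read_csv("input_sumstats.tsv", sep="\t")
# Drop any column that is entirely NA before running munge_sumstats.py.
sumstats = sumstats.dropna(axis=1, how="all")
sumstats.to_csv("cleaned_sumstats.tsv", sep="\t", index=False)
```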
Switching my SNP ID column from chrom:pos:ref:alt IDs to rsIDs worked for me.
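In case it helps, a rough sketch of that ID swap, assuming you have a lookup table mapping chrom:pos:ref:alt keys to rsIDs; the lookup file and its column names are made up, and the SNP column is called MarkerName as in the post below.

```python
import pandas as pd

# Hypothetical lookup table with columns "variant_id" (chrom:pos:ref:alt) and "rsid".
lookup = pd.read_csv("variant_to_rsid.tsv", sep="\t")
id_map = dict(zip(lookup["variant_id"], lookup["rsid"]))

sumstats = pd.read_csv("input_sumstats.tsv", sep="\t")
# Replace chrom:pos:ref:alt identifiers with rsIDs and drop variants without one,
# since the w_hm3.snplist merge keys on rs numbers.
sumstats["MarkerName"] = sumstats["MarkerName"].map(id_map)
sumstats = sumstats.dropna(subset=["MarkerName"])
sumstats.to_csv("rsid_sumstats.tsv", sep="\t", index=False)
```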
I am having the same issue. Any solution?
I am trying to run cross-trait LDSC for a multivariate analysis. While using munge_sumstats.py to format the summary statistics, I am getting this error on multiple files.
Any suggestions?
Call:
./munge_sumstats.py \
--signed-sumstats Effect,0 \
--out imt_formatted \
--merge-alleles w_hm3.snplist \
--N-col N \
--chunksize 50000 \
--snp MarkerName \
--sumstats /home/nhasbani/projects/multipheno_cad/CIMT_PLAQ_CAC/IMT.meta_CHARGE_UKB1.TBL
Interpreting column names as follows:
MarkerName: Variant ID (e.g., rs number)
P-value: p-Value
Allele2: Allele 2, interpreted as non-ref allele for signed sumstat.
Allele1: Allele 1, interpreted as ref allele for signed sumstat.
Effect: Directional summary statistic as specified by --signed-sumstats.
N: Sample size
Reading list of SNPs for allele merge from w_hm3.snplist
Read 1217311 SNPs for allele merge.
Reading sumstats from /home/nhasbani/projects/multipheno_cad/CIMT_PLAQ_CAC/IMT.meta_CHARGE_UKB1.TBL into memory 50000 SNPs at a time.
........................................................................................................................................................................................... done
ERROR converting summary statistics:
Traceback (most recent call last):
  File "./munge_sumstats.py", line 686, in munge_sumstats
    dat = parse_dat(dat_gen, cname_translation, merge_alleles, log, args)
  File "./munge_sumstats.py", line 301, in parse_dat
    dat = pd.concat(dat_list, axis=0).reset_index(drop=True)
  File "/home/nhasbani/anaconda3/envs/ldsc/lib/python2.7/site-packages/pandas/core/reshape/concat.py", line 206, in concat
    copy=copy)
  File "/home/nhasbani/anaconda3/envs/ldsc/lib/python2.7/site-packages/pandas/core/reshape/concat.py", line 239, in __init__
    raise ValueError('No objects to concatenate')
ValueError: No objects to concatenate
Conversion finished at Wed Nov 3 14:09:45 2021
Total time elapsed: 37.59s
Traceback (most recent call last):
  File "./munge_sumstats.py", line 745, in <module>
    munge_sumstats(parser.parse_args(), p=True)
  File "./munge_sumstats.py", line 686, in munge_sumstats
    dat = parse_dat(dat_gen, cname_translation, merge_alleles, log, args)
  File "./munge_sumstats.py", line 301, in parse_dat
    dat = pd.concat(dat_list, axis=0).reset_index(drop=True)
  File "/home/nhasbani/anaconda3/envs/ldsc/lib/python2.7/site-packages/pandas/core/reshape/concat.py", line 206, in concat
    copy=copy)
  File "/home/nhasbani/anaconda3/envs/ldsc/lib/python2.7/site-packages/pandas/core/reshape/concat.py", line 239, in __init__
    raise ValueError('No objects to concatenate')
ValueError: No objects to concatenate
(ldsc) nhasbani@HGCNT92:~/projects/multipheno_cad/ldsc>
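For what it's worth, the traceback shows the mechanism rather than the root cause: parse_dat reaches the pd.concat call at line 301 with an empty list of chunks, which usually means every SNP was filtered out before the concatenation (for example, because no IDs matched the --merge-alleles list). A tiny reproduction of the pandas side, not the fix itself:

```python
import pandas as pd

# munge_sumstats.py collects the filtered chunks into a list and concatenates them.
# If every SNP is dropped -- e.g. nothing matches w_hm3.snplist, or required fields
# are all missing -- that list ends up empty and pandas raises exactly this error.
chunks = []  # pretend every chunk was filtered away
pd.concat(chunks, axis=0)  # ValueError: No objects to concatenate
```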