Closed: olavurmortensen closed this issue 5 years ago
This seems to become a problem when the number of variants is large: the example above has 13,843 variants, while another example with 1,419 variants works fine.
Can you run with the option "--outvcf 0" to see if the error still occurs? This will confirm whether the error is in the recently added code that outputs the phased VCF.
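For example (the binary path and input file names here are placeholders; substitute your actual files):
path_HAPCUT2_binary --fragments frags.txt --VCF input.vcf --out out.haps --outvcf 0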
@vibansal I ran HapCUT2 with --outvcf 0 and got the error below instead, but no *** buffer overflow detected *** error. Looks like it isn't occurring when writing the VCF.
[2019:05:29 11:46:47] fragment file: linked_fragments
[2019:05:29 11:46:47] variantfile (VCF format):nocalls_removed.vcf
[2019:05:29 11:46:47] haplotypes will be output to file: haplotypes
[2019:05:29 11:46:47] solution convergence cutoff: 5
[2019:05:29 11:46:47] QVoffset: 33
[2019:05:29 11:46:47] Calling Max-Likelihood-Cut based haplotype assembly algorithm
[2019:05:29 11:46:47] read 13839 variants from nocalls_removed.vcf file
[2019:05:29 11:46:47] no of non-trivial connected components 428 max-Degree 1024 connected variants 4088 coverage-per-variant 14.089041
[2019:05:29 11:46:47] fragments 19672 snps 13839 component(blocks) 428
[2019:05:29 11:46:47] processed fragment file and variant file: fragments 19672 variants 13839
[2019:05:29 11:46:51] OUTPUTTING PRUNED HAPLOTYPE ASSEMBLY TO FILE haplotypes
Would it be possible for you to share the fragment and VCF files?
Unfortunately no, this is confidential data.
In that case, can you run HapCUT2 using the debugger gdb and share the output?
I can try; I have used gdb before. Can you tell me briefly how?
gdb -ex=r --args path_HAPCUT2_binary --fragments frags.txt --VCF input.vcf --out out.haps
If the program exits abnormally, run 'backtrace' at the (gdb) prompt and capture the output.
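With the file names from your log above, that would presumably look like:
gdb -ex=r --args path_HAPCUT2_binary --fragments linked_fragments --VCF nocalls_removed.vcf --out haplotypes
and then, after the abnormal exit:
(gdb) backtrace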
You can share the output via email (vibansal at ucsd.edu) if needed.
Thanks. I've sent you the output. I ran HapCUT2 the same way as above and issued the backtrace command as you described.
The problem was in the code that outputs the phased VCF file. I have pushed an update to the 'master' branch. Let me know if it works or not.
That fixed it, thanks a bunch!
I've included two logs: one with the *** buffer overflow detected *** message, and one with just the log from HapCUT2. I was using HapCUT2 at commit hash fd01a1d2794c990d986a720a13905b167eb943d7.
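That revision can presumably be checked out and rebuilt with a standard git checkout, assuming the repository's usual Makefile build:
git checkout fd01a1d2794c990d986a720a13905b167eb943d7
make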
I don't know what else I can share to help debug; please let me know if there is anything.
HapCUT2 log:
Error message: