Hmm interesting. I cannot reproduce the bug. My output:

graphmap -I -r 99_otus.fasta
[Index 18:06:09] Running in fast and sensitive mode. Two indexes will be used (double memory consumption).
[Index 18:06:09] Generating index.
[Index 18:07:35] Generating secondary index.
[Index 18:09:00] Index generated in 170.48 sec.
[Index 18:09:00] Memory consumption: [currentRSS = 10764 MB, peakRSS = 10897 MB]
[Index 18:09:00] Finished generating index. Note: only index was generated due to selected program arguments.
Do you have the latest version pulled and compiled?
My fault! I was working on the cluster and had only asked for 8 GB (your log shows the index build peaking at almost 11 GB). Sorry!
Then again, if these are out-of-memory problems, they should be reported as such and not just result in a segfault... :)
Agreed. And they are handled almost everywhere. I guess you found one of the rare places where I failed to check the allocation result :D Will re-check.
I added some missing checks for memory allocation when generating the index. Would you mind re-running your original test to verify it works?
Fixed in 3c64651:
[Fri, 21 Aug 15 02:03:30 +0000 FATAL] #1: Memory assertion failure. Possible cause - not enough memory or memory not allocated. When allocating all_kmers_. Requested size: 4612041688 bytes.
[Fri, 21 Aug 15 02:03:30 +0000 FATAL] Exiting.
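For what it's worth, a check of this kind (allocate with the non-throwing operator new, test the result, and report a FATAL error instead of dereferencing a null pointer later) might look roughly like the sketch below. This is a hedged illustration, not graphmap's actual code; the function name is invented, and the error text just mirrors the log above.

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <new>

// Sketch of a checked allocation (hypothetical helper, not graphmap code).
// new (std::nothrow) returns nullptr on failure instead of throwing, so we
// can report the out-of-memory condition ourselves and exit cleanly.
static int64_t *AllocateKmers(size_t num_kmers) {
  int64_t *all_kmers = new (std::nothrow) int64_t[num_kmers];
  if (all_kmers == nullptr) {
    fprintf(stderr,
            "FATAL: Memory assertion failure. Possible cause - not enough "
            "memory or memory not allocated. When allocating all_kmers_. "
            "Requested size: %zu bytes.\n",
            num_kmers * sizeof(int64_t));
    exit(EXIT_FAILURE);
  }
  return all_kmers;
}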
Thanks!
The reference file of interest is gg_13_5_otus/rep_set/99_otus.fasta, which comes with ftp://greengenes.microbio.me/greengenes_release/gg_13_5/gg_13_8_otus.tar.gz. It might be considered unusual insofar as it contains only short sequences (16S rRNA; shortest 1254 bp, longest 2368 bp) and all sequence IDs are numeric (but unique).
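As a quick sanity check, the length range above can be verified with a short standalone C++ scan of the FASTA file (a throwaway helper, not part of graphmap):

#include <algorithm>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>

// Report the shortest and longest sequence in a FASTA file.
int main(int argc, char **argv) {
  if (argc != 2) {
    std::cerr << "usage: " << argv[0] << " ref.fasta\n";
    return 1;
  }
  std::ifstream in(argv[1]);
  std::string line;
  size_t cur = 0, min_len = SIZE_MAX, max_len = 0;
  auto flush = [&]() {
    if (cur > 0) {
      min_len = std::min(min_len, cur);
      max_len = std::max(max_len, cur);
      cur = 0;
    }
  };
  while (std::getline(in, line)) {
    if (!line.empty() && line[0] == '>') flush();  // header starts a new record
    else cur += line.size();                       // accumulate sequence length
  }
  flush();  // account for the last record
  std::cout << "shortest: " << min_len << " bp, longest: " << max_len << " bp\n";
  return 0;
}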
Here's how to reproduce the segfault:

graphmap -I -r 99_otus.fasta
Here's a backtrace:
This happens with release v0.21 and also with commit 95b9dca.