Open dakami opened 6 years ago
I had the same issue with too many snap processes exhausting system memory. I managed to limit the number of snap processes spawned by using a higher -l, so that more lines of the FASTQ file go into each trimmed file — fewer trimmed files created means fewer snap processes spawned. (Note that -l counts lines, not sequences, so it should be a multiple of 4.) This way the run completed without exhausting memory and without error messages; however, I still ended up with no usable results — all stats were 0.
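For anyone else hitting this, here is a rough sketch of how I picked -l. The function and numbers are mine for illustration, not from the pipeline — you'd plug in your own line count and however many concurrent snap processes your RAM can absorb:

```python
# Hypothetical helper: choose a -l value (in lines) so that at most
# max_chunks trimmed files -- and hence snap processes -- are created.
def chunk_lines(total_lines: int, max_chunks: int) -> int:
    """Return a line count per chunk, rounded up to a multiple of 4
    (4 FASTQ lines = 1 read), yielding at most max_chunks chunks."""
    per_chunk = -(-total_lines // max_chunks)  # ceiling division
    # Round up to the next multiple of 4 so no read is split across chunks.
    return ((per_chunk + 3) // 4) * 4

# Example: a large FASTQ of ~2 billion lines, capped at 3 concurrent
# snap processes (3 x 64 GB index loads fits under 256 GB of RAM).
print(chunk_lines(2_000_000_000, 3))
```

The exact cap depends on how much RAM each snap instance actually takes on your index, so treat the 3-process figure as a starting guess.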
I have a server with 256GB of RAM and two ~127GB FASTQ files. It seems that once the pipeline starts creating trimmed files, they're held entirely in core, and eventually system memory is exhausted.
There's also effectively no bound on the number of snap processes spawned, and snap (even with the -map flag) will devour 64GB+ per hg38 index load.
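Back-of-envelope, using the figures above (the 64GB-per-index-load number is from my run; your index may differ):

```python
# How many concurrent snap processes fit in RAM at all?
# 64 GB per hg38 index load, 256 GB total RAM (numbers from this report).
ram_gb = 256
index_gb = 64
max_index_loads = ram_gb // index_gb
print(max_index_loads)
```

Four index loads alone would fill the whole 256GB, leaving nothing for the in-core trimmed files — so even a handful of spawned snap processes is enough to tip the machine over.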
I can of course get access to larger servers, but you might want to issue a warning of some kind, or implement out-of-core resource access.
Thanks for your work here!