Hi Zeyu,
Thank you for developing such a good merging tool. I currently have 6 genomes and 100 short-read resequencing samples, and I intend to perform SV detection on the 100 individuals using the following steps:
Step 1: Construct a pan-genome graph
Command: perl ../../scripts/build_graph.pl -b ./Ref.fa -o ./out -t 30 ./ge1.fa ./ge2.fa ./ge3.fa ./ge4.fa ./ge5.fa
Result: 03.final.gfa is generated and used as Ref.gfa in the next step.
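For reference, in case it helps with debugging, the record types in Ref.gfa can be counted with a generic one-liner before indexing (plain awk/sort, not part of your pipeline):
awk '{print $1}' Ref.gfa | sort | uniq -c   # counts of S (segment), L (link), P (path), W (walk) lines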
Step 2: Run Snakefile_NGS
Command: snakemake -j 60 --reason --printshellcmds -s Snakefile_NGS
Error:
vg [warning]: System's vm.overcommit_memory setting is 2 (never overcommit). vg does not work well under these conditions; you may appear to run out of memory with plenty of memory left. Attempting to unsafely reconfigure jemalloc to deal better with this situation.
[IndexRegistry]: Checking for haplotype lines in GFA.
[vg autoindex] Executing command: vg autoindex --tmp-dir tmp --workflow giraffe --gfa Ref.gfa --prefix Ref -t 50 -R XG -R VG
[IndexRegistry]: Constructing VG graph from GFA input.
[IndexRegistry]: Constructing XG graph from VG graph.
[IndexRegistry]: Constructing a greedy path cover GBWT
[IndexRegistry]: Constructing GBZ using NamedNodeBackTranslation.
[IndexRegistry]: Finding snarls in graph.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
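As an aside, the vg warning at the top of the log refers to the vm.overcommit_memory=2 setting; under that policy the kernel can refuse allocations even while plenty of physical memory is still free, which would show up exactly as a std::bad_alloc. For reference, the setting can be checked and (with root access) relaxed with standard Linux sysctl commands, e.g.:
cat /proc/sys/vm/overcommit_memory      # 2 = "never overcommit", the value vg warns about
sudo sysctl -w vm.overcommit_memory=0   # switch back to the kernel's default heuristic (requires root)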
My server has 200 GB of RAM, so it shouldn't be a genuine lack of memory. Have you encountered this problem as well? Looking forward to your reply.
Thanks in advance!
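P.S. In case it is relevant, I could also try re-running the failing vg autoindex step by hand with fewer threads and an explicit memory target. The command below is copied from the log above; the -t value is arbitrary, and -M/--target-mem is an assumption about my vg build supporting that option:
vg autoindex --tmp-dir tmp --workflow giraffe --gfa Ref.gfa --prefix Ref -t 16 -M 150G -R XG -R VG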