BGI-Qingdao / TGS-GapCloser

A gap-closing software tool that uses long reads to enhance genome assembly.
GNU General Public License v3.0

tgsgapcloser got Killed in step2.2 TGSGapCandidate #65

Closed JhinAir closed 1 year ago

JhinAir commented 1 year ago

Hi Lidong,

The run got killed at step 2.2 TGSGapCandidate:

    /share/home/zhou3lab/zhangxinpei/bioapp/miniconda3/envs/gapless/bin/tgsgapcloser: line 443: 37971 Killed $Candidate --ont_reads_a $TGS_READS --contig2ont_paf $OUT_PREFIX.sub.paf --min_nread $MIN_NREAD --max_nread $MAX_NREAD --candidate_max $MAX_CANDIDATE --candidate_shake_filter --candidate_merge < $TMP_INPUT_SCAFF_INFO > $OUT_PREFIX.ont.fasta 2> $OUT_PREFIX.cand.log

I ran into a huge memory issue at step 2.1 (minimap2) several times as well; that step finally ran through, producing a paf file of over 50 GB, after I requested 500 GB of memory. But TGSGapCandidate was then killed for the same reason:

    slurmstepd: error: Detected 1 oom-kill event(s) in step 1401581.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler

Does this step also require a huge amount of memory? I have attached cand.log. Could you please help with this? Thank you!

Best,
Jing

Attachments: tgsgapcloser.cand.log, pipe.log
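For reference, the memory limit that triggered the kill is the one requested from SLURM; a minimal sketch of that kind of request (job name, memory, and CPU values here are placeholders, not the exact submission script):

```bash
#!/bin/bash
#SBATCH --job-name=tgsgapcloser     # placeholder job name
#SBATCH --mem=500G                  # the cgroup OOM killer enforces whatever is requested here
#SBATCH --cpus-per-task=16          # placeholder thread count

# the tgsgapcloser command line goes here
```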

cchd0001 commented 1 year ago

Hi Jing,

Yes, this is an out-of-memory issue.

The best solution is to reduce the depth of your input reads. However, increasing --min_idy and --min_match can also reduce the memory use of this step without re-running the previous steps.
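For illustration, a minimal sketch of such a re-run (file names, thread count, and the threshold values are placeholders to adapt; the other flags follow the tool's usual command line):

```bash
# Assumption: re-using the same --output prefix lets the pipeline pick up its
# existing intermediate files; per the note above, the earlier minimap2 step
# does not need to be repeated when only these thresholds change.
tgsgapcloser \
    --scaff scaffolds.fasta \
    --reads ont_reads.fasta \
    --output out_prefix \
    --min_idy 0.4 \
    --min_match 500 \
    --thread 16
```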

Best wishes,
Lidong Guo

JhinAir commented 1 year ago

I see. I will try these two parameters and also downsample the input reads. I will update. Thank you!
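One possible way to downsample the reads (seqkit is my choice for illustration, not something suggested here; the proportion and seed are placeholders):

```bash
# Keep roughly half of the long reads to cut the input depth;
# -p sets the proportion to keep, -s fixes the random seed for reproducibility.
seqkit sample -p 0.5 -s 42 ont_reads.fasta -o ont_reads.sub.fasta
```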

JhinAir commented 1 year ago

Hi, it works. Thank you for your suggestion! Best

cyycyj commented 1 year ago

> Hi Jing,
>
> Yes, this is an out-of-memory issue.
>
> The best solution is to reduce the depth of your input reads. However, increasing --min_idy and --min_match can also reduce the memory use of this step without re-running the previous steps.
>
> Best wishes,
> Lidong Guo

Dear Developer,

My reads are about 180X in depth. Can I split the reads into multiple parts to do the polishing? I have encountered the out-of-memory issue several times.
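For what it's worth, a generic way to split a read file into equal parts (seqkit split2 here is an outside tool, used only for illustration; whether per-part runs work with this pipeline is exactly what I am asking):

```bash
# Split the read set into 4 parts of roughly equal read count (the part count is a placeholder).
seqkit split2 --by-part 4 ont_reads.fastq.gz -O reads_parts/
```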

Thanks a lot.