Nextomics / NextDenovo

Fast and accurate de novo assembler for long reads
GNU General Public License v3.0

nextgraph segmentation fault #113

Open ucassee opened 3 years ago

ucassee commented 3 years ago

Hi developer,

Describe the bug

I encountered an error with 03.ctg_graph/01.ctg_graph.sh.work/ctg_graph0/nextDenovo.sh. I attached the log files: nextDenovo.sh.e.txt and pid161556.log.info.txt.

Look forward to your reply. Thanks.

Genome characteristics
genome size: 2.1G

Input data
PacBio

Operating system
Which operating system and version are you using? PBS

GCC
What version of GCC are you using? gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC)

Python
What version of Python are you using? Python 2.7.16

NextDenovo
What version of NextDenovo are you using? NextDenovo v2.4.0

moold commented 3 years ago

Hi, first follow #101 to check the ovl files. If there is no error, I may need the input of nextgraph (all ovl and ovl.bl files) to reproduce this bug; without these files, it is almost impossible to fix this error.

ucassee commented 3 years ago

Hi @moold, I attached the log file (check.log) produced by:

ls cns_align*/cns.filt.dovt.ovl | while read line; do echo $line; /data/software/NextDenovo/bin/ovl_cvt -m 1 $line | head -5; done > check.log

moold commented 3 years ago

Hi, it seems everything is OK, but this only shows the first 5 lines of each file. You can rerun nextgraph with an 01.ctg_graph.input.ovls that contains only one ovl file, and do this for each ovl file to check which one causes this error.
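A minimal sketch of that per-file check, assuming the nextgraph invocation takes the same -a 1 / -f input.seqs / input.ovls / -o arguments shown later in this thread; the paths and output names below are placeholders, and the exact options should be copied from ctg_graph0/nextDenovo.sh:

```bash
# Hypothetical sketch: run nextgraph once per ovl file to locate the one that
# triggers the segmentation fault. Paths and option values are placeholders.
for ovl in cns_align*/cns.filt.dovt.ovl; do
    echo "$ovl" > 01.ctg_graph.input.ovls.single   # input list with a single ovl file
    if /data/software/NextDenovo/bin/nextgraph -a 1 \
           -f 03.ctg_graph/01.ctg_graph.input.seqs \
           01.ctg_graph.input.ovls.single \
           -o test.$(basename "$(dirname "$ovl")").fasta \
           > "${ovl}.nextgraph.log" 2>&1; then
        echo "OK    $ovl"
    else
        echo "FAIL  $ovl"
    fi
done
```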

ucassee commented 3 years ago

Hi @moold, I ran nextgraph with each cns_align*/cns.filt.dovt.ovl file separately. I found that cns_align(02/03/04/08)/cns.filt.dovt.ovl trigger the segmentation fault error.

moold commented 3 years ago

So you need to rerun the cns_align(02/03/04/08) subtasks to regenerate those files. You can run them one by one to avoid unknown errors.
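As a rough sketch of running the four failing subtasks one after another; the work-directory layout below is an assumption about NextDenovo's usual run directory, so point it at whatever directories actually hold cns_align02/03/04/08 in this run:

```bash
# Hypothetical sketch: rerun the four failing cns_align subtasks sequentially.
# The 02.cns_align.sh.work path is an assumption; adjust to your run directory.
for i in 02 03 04 08; do
    work=02.cns_align/02.cns_align.sh.work/cns_align${i}
    echo "rerunning ${work}"
    ( cd "${work}" && bash nextDenovo.sh ) > cns_align${i}.rerun.log 2>&1 \
        || { echo "cns_align${i} failed"; break; }
done
```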

ucassee commented 3 years ago

Hi @moold, the running time of each subtask is about 50 days. Can I rerun them in parallel, or modify some settings to speed them up? The command in one subtask looks like:

time /data/software/NextDenovo/bin/minimap2-nd -I 20G --step 2 --dual=yes -t 28 -x ava-pb -k 17 -w 17 --minlen 2000 --maxhan1 5000 /data/Project/01.assmbly/01_rundir/02.cns_align/01.seed_cns.sh.work/seed_cns0/cns.fasta /data/Project/01.assmbly/01_rundir/02.cns_align/01.seed_cns.sh.work/seed_cns2/cns.fasta -o cns.filt.dovt.ovl

moold commented 3 years ago

I am not sure; an error in an ovl file is usually caused by running out of RAM. Note that the required RAM is dynamic, so if you have enough RAM you can run them in parallel. Our tests showed that the peak RAM of each of these subtasks is about 32~120G, depending on the maximum read length and the values of the -I and -t options.
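For illustration only, since the scheduler here is PBS: the reruns could be submitted as one job per subtask, each with an explicit memory request above the ~120G peak mentioned above. This is a hypothetical sketch; the Torque-style resource syntax, walltime, and script paths are assumptions and will differ between sites:

```bash
# Hypothetical sketch: one PBS job per failing subtask so each rerun gets its
# own node; memory request sized above the ~120G peak, walltime ~50+ days.
for i in 02 03 04 08; do
    echo "cd \$PBS_O_WORKDIR && bash 02.cns_align/02.cns_align.sh.work/cns_align${i}/nextDenovo.sh" | \
        qsub -N cns_align${i} -l nodes=1:ppn=28,mem=150gb,walltime=1400:00:00
done
```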

ucassee commented 3 years ago

I run it on a cluster. Each node has 256G RAM and 28 cores, so I think RAM is not a problem.

moold commented 3 years ago

No, 256G RAM may not be enough, because your -I, -t, and maximum read length differ from those in our tests.

ucassee commented 3 years ago

I run each subtask on a different node. If the RAM is not enough, should I set -I smaller?

moold commented 3 years ago

Yes, and maybe -t as well.
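For illustration, the subtask command quoted earlier with a smaller index batch size and fewer threads; the -I 10G and -t 16 values are only example numbers, not recommendations from the developer, and every other option should stay exactly as NextDenovo generated it:

```bash
# Hypothetical sketch: same minimap2-nd call with -I (index batch size, the main
# RAM knob in minimap2) and -t (threads) reduced; all other options unchanged.
time /data/software/NextDenovo/bin/minimap2-nd -I 10G --step 2 --dual=yes -t 16 \
    -x ava-pb -k 17 -w 17 --minlen 2000 --maxhan1 5000 \
    /data/Project/01.assmbly/01_rundir/02.cns_align/01.seed_cns.sh.work/seed_cns0/cns.fasta \
    /data/Project/01.assmbly/01_rundir/02.cns_align/01.seed_cns.sh.work/seed_cns2/cns.fasta \
    -o cns.filt.dovt.ovl
```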

ucassee commented 3 years ago

Will smaller -I and -t values significantly increase the running time?

DaniPaulo commented 2 months ago

Was this solved? I'm having a similar issue:

[INFO] 2024-04-30 18:24:00 Initialize graph and reading...
/90daydata/tephritid_gss/dani/nextDenovo/SMB_Q12L5K_nextDenovo/03.ctg_graph/01.ctg_graph.sh.work/ctg_graph1/nextDenovo.sh: line 5: 267342 Segmentation fault (core dumped) /project/tephritid_gss/daniel.paulo/envs/nextDenovo/lib/python3.10/site-packages/nextdenovo/bin/nextgraph -a 1 -f /90daydata/tephritid_gss/dani/nextDenovo/SMB_Q12L5K_nextDenovo/.//03.ctg_graph/01.ctg_graph.input.seqs /90daydata/tephritid_gss/dani/nextDenovo/SMB_Q12L5K_nextDenovo/.//03.ctg_graph/01.ctg_graph.input.ovls -o nd.asm.p.fasta

my run.cfg looks like this:

[General]
job_type = local
job_prefix = nextDenovo
task = all
rewrite = yes
deltmp = yes
parallel_jobs = 1
input_type = raw
input_fofn = input.fofn
read_type = ont
workdir = ./

[correct_option]
genome_size = 640m
seed_depth = 31
pa_correction = 6
minimap2_options_raw = -t 48
sort_options = -m 20g -t 48
correction_options = -p 48 --blacklist

[assemble_option]
minimap2_options_cns = -t 48
nextgraph_options = -a 1

and I'm running on a high-performance computing (HPC) cluster: 48 CPUs, 1 node, 1 task, max 372 GB RAM.