jpummil opened this issue 4 years ago
I found a similar error message, but it happened in the "correct" step. Here is the full output:
Smartmatch is experimental at ~/Tools/NECAT/Linux-amd64/bin/Plgd/Project.pm line 263.
2021-01-07 10:50:00 [Info] Start correcting rawreads.
2021-01-07 10:50:00 [Info] Start filtering reads for consensus.
2021-01-07 10:50:00 [Info] Run script: ~/NECAT_correct/Lung01B/scripts/cns_pprr.sh 2>&1 |tee ~/NECAT_correct/Lung01B/scripts/cns_pprr.sh.log
~/Tools/NECAT/Linux-amd64/bin/fsa_rd_tools: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by ~/Tools/NECAT/Linux-amd64/bin/fsa_rd_tools)
~/Tools/NECAT/Linux-amd64/bin/fsa_rd_tools: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by ~/Tools/NECAT/Linux-amd64/bin/fsa_rd_tools)
2021-01-07 10:50:00 [Warning] Failed to run script, 1, ~/NECAT_correct/Lung01B/scripts/cns_pprr.sh
2021-01-07 10:50:00 [Error] Reached to maximum number of script errors
System information: CentOS Linux release 7.8.2003 (Core), Perl v5.26.2, GCC 4.8.5
I installed NECAT from the executable binaries. The node has enough memory (3 TB) and sufficient disk space.
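The missing-symbol messages above suggest the system libstdc++ is older than the one the prebuilt binaries were compiled against (CentOS 7's stock GCC 4.8 library typically only provides symbols up to GLIBCXX_3.4.19). A quick way to confirm is to compare what the library provides with what the binary requires, for example:

# GLIBCXX versions provided by the system library
strings /lib64/libstdc++.so.6 | grep GLIBCXX
# GLIBCXX versions required by the prebuilt binary (here 3.4.20 and 3.4.21)
objdump -T ~/Tools/NECAT/Linux-amd64/bin/fsa_rd_tools | grep GLIBCXX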
When I run "necat.pl config water_scs_config.txt", the error presented is: Smartmatch is experimental at /data/liqingmei/tools/NECAT/Linux-amd64/bin/Plgd/Project.pm line 263. I don't know how to solve this problem; could anyone give some suggestions?
+1 This is similar to the Segfault issue earlier: https://github.com/xiaochuanle/NECAT/issues/16
Did you figure out a solution @jpummil @Goatofmountain ?
Hi, I have this same error in the correction step and also when trying to run a de novo assembly with NECAT. I see that you had this problem more than two years ago; has anyone been able to solve it? @jpummil @Goatofmountain
Any information would be very helpful for me!
Hi @maricorozo
I solved this by switching to the Flye assembler for Nanopore data, with polishing. It is a maintained, actively managed project with a responsive team. NECAT doesn't seem to have been maintained since it was published.
Cheers, Chris
I have solved this problem. NECAT requires GCC > 4.8.5 and Perl > v5.26.2; if your versions are older than that, you need to upgrade, or provide a newer copy of the library. First, copy a newer libstdc++ (e.g. libstdc++.so.6.0.20 or libstdc++.so.6.0.26) to /usr/lib64/, then run 'rm /usr/lib64/libstdc++.so.6' and 'ln -s /usr/lib64/libstdc++.so.6.0.26 /usr/lib64/libstdc++.so.6'.
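A less invasive variant of the same idea, assuming you already have a newer libstdc++.so.6 available (for example from a GCC >= 5 installation or a conda environment; the ~/newer-libs directory and the config file name below are just placeholders), is to point LD_LIBRARY_PATH at it instead of replacing the system file:

# Stage a newer libstdc++ in a private directory (paths are placeholders)
mkdir -p ~/newer-libs
cp /path/to/newer/libstdc++.so.6.0.26 ~/newer-libs/
ln -s ~/newer-libs/libstdc++.so.6.0.26 ~/newer-libs/libstdc++.so.6
# Prepend it for the NECAT run only, leaving /usr/lib64 untouched
export LD_LIBRARY_PATH=~/newer-libs:$LD_LIBRARY_PATH
necat.pl correct your_config.txt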
Using just a few threads avoids this error; I use 12 threads. It works and still finishes in a reasonable amount of time for small genomes (<= 100 Mbp).

I also tried the following. The installation notes say "Ubuntu 16.04 (GCC 5.4.0, Perl v5.22.1)", so I installed gcc-5 and g++-5 on an Ubuntu 20.04 machine (for which you need to add 'deb http://us.archive.ubuntu.com/ubuntu/ xenial main' and 'deb http://us.archive.ubuntu.com/ubuntu/ xenial universe' to /etc/apt/sources.list), cloned the git repository, and in src ran 'make CC=/usr/bin/gcc-5 CXX=/usr/bin/g++-5' (commands sketched below). Then I used that NECAT installation to run some assemblies with 92 threads. With a small genome (~77 Mbp) I no longer get the buffer overflow and maximum-number-of-script-errors failure, but with a large genome (> 2 Gbp) I still get the error... I will try fewer threads... or maybe it's a RAM issue?
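For reference, here are the build steps above in command form (the apt commands are my reconstruction of "add the xenial lines and install gcc-5/g++-5"; adjust mirrors and paths for your machine):

# Add the xenial repositories so apt can find gcc-5/g++-5 on Ubuntu 20.04
echo 'deb http://us.archive.ubuntu.com/ubuntu/ xenial main' | sudo tee -a /etc/apt/sources.list
echo 'deb http://us.archive.ubuntu.com/ubuntu/ xenial universe' | sudo tee -a /etc/apt/sources.list
sudo apt-get update && sudo apt-get install gcc-5 g++-5
# Build NECAT from source with the older compilers
git clone https://github.com/xiaochuanle/NECAT.git
cd NECAT/src
make CC=/usr/bin/gcc-5 CXX=/usr/bin/g++-5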
I was making good progress, but in what appears to be the "assembly" portion of the pipeline, I get the following message:
2020-10-08 10:47:35 [Warning] Failed to run script, 139, /local_scratch/211547/horridus/scripts/tr_al_vol_0.sh
2020-10-08 10:47:35 [Error] Reached to maximum number of script errors
Looking at the log for that section of the steps:
[Thu Oct 8 10:47:12 2020] INFO: mapping 800 --- 85
/local_scratch/211547/horridus/scripts/tr_al_vol_0.sh: line 16: 86284 Segmentation fault (core dumped) /scrfs/storage/jpummil/home/.conda/envs/NECAT/share/necat-0.0.1_update20200803-0/bin/oc2asmpm -n 100 -z 10 -b 2000 -e 0.5 -j 1 -u 0 -a 400 -u 1 -t 48 /local_scratch/211547/horridus/2-trim_bases/pac_in 0 /local_scratch/211547/horridus/2-trim_bases/pac_in/pm_result_0
The node has ample memory (2 TB) and should also have sufficient disk space, so I don't expect that to be the issue.
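Given the earlier comment that using fewer threads avoids this error, one thing worth trying is lowering the thread count and re-running the stage. A minimal sketch, assuming the config field is named THREADS (as in the template that necat.pl config generates), that the per-stage scripts are regenerated from the config on re-run, and that horridus_config.txt is a placeholder for the project's config file:

# Lower the thread count in the project config (field name assumed)
sed -i 's/^THREADS=.*/THREADS=12/' horridus_config.txt
# Re-run the assembly stage; completed steps should be skipped
necat.pl assemble horridus_config.txt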