Gerlex89 opened this issue 2 years ago
Hi, could you paste the content of path/to/NextPolish.backup0/00.score_chain/01.db_split.sh.work/db_split1/nextPolish.sh.e here?
Hi, the content of path/to/NextPolish.backup0/00.score_chain/01.db_split.sh.work/db_split1/nextPolish.sh.e is:
hostname
+ hostname
cd path/to/NextPolish/NextPolish.backup0/00.score_chain/01.db_split.sh.work/db_split1
+ cd path/to/NextPolish/NextPolish.backup0/00.score_chain/01.db_split.sh.work/db_split1
time path/to/NextPolish/NextPolish.backup0/bin/seq_split -d path/to/NextPolish/NextPolish.backup0 -m 315166.6666666667 -n 6 -t 5 -i 1 -s 1891000 -p input.sgspart path/to/NextPolish/NextPolish.backup0/test.fofn
+ time path/to/NextPolish/NextPolish.backup0/bin/seq_split -d path/to/NextPolish/NextPolish.backup0 -m 315166.6666666667 -n 6 -t 5 -i 1 -s 1891000 -p input.sgspart path/to/NextPolish/NextPolish.backup0/test.fofn
time: cannot run path/to/NextPolish/NextPolish.backup0/bin/seq_split: No such file or directory
Command exited with non-zero status 127
0.00user 0.00system 0:00.00elapsed ?%CPU (0avgtext+0avgdata 1020maxresident)k
0inputs+0outputs (0major+25minor)pagefaults 0swaps
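For context, exit status 127 is the POSIX shell's conventional "command not found / no such file" code, which is exactly what `time` propagates in the log above. A minimal reproduction (the path below is deliberately nonexistent):

```python
import subprocess

# Asking a POSIX shell to run a path that does not exist yields
# exit status 127 -- the same status reported in the log above.
result = subprocess.run(
    ["sh", "-c", "/no/such/path/seq_split"],
    capture_output=True, text=True,
)
print(result.returncode)  # 127
```

So the error here is purely "the binary is missing", not a problem with the input data.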
As the log says, the seq_split executable is missing, so follow the instructions here to reinstall. By the way, do not forget to run the make command after downloading.
I reinstalled, but now I get the log below in the nextPolish.sh.e file when running the test data. However, a clean installation on a server ran successfully, so the problem clearly points to my local Python or Anaconda installation. I would be glad if you have an idea what could be causing this issue; for now it is fine to close this question.
Thanks!
hostname
+ hostname
cd path/to/NextPolish/test_data/01_rundir/00.lgs_polish/04.polish.ref.sh.work/polish_genome1
+ cd path/to/NextPolish/test_data/01_rundir/00.lgs_polish/04.polish.ref.sh.work/polish_genome1
time /path/to/anaconda3/bin/python path/to/NextPolish/lib/nextpolish2.py -sp -p 1 -g path/to/NextPolish/test_data/./01_rundir/00.lgs_polish/input.genome.fasta -b path/to/NextPolish/test_data/./01_rundir/00.lgs_polish/input.genome.fasta.blc -i 0 -l path/to/NextPolish/test_data/./01_rundir/00.lgs_polish/lgs.sort.bam.list -r ont -o genome.nextpolish.part000.fasta
+ time /path/to/anaconda3/bin/python path/to/NextPolish/lib/nextpolish2.py -sp -p 1 -g path/to/NextPolish/test_data/./01_rundir/00.lgs_polish/input.genome.fasta -b path/to/NextPolish/test_data/./01_rundir/00.lgs_polish/input.genome.fasta.blc -i 0 -l path/to/NextPolish/test_data/./01_rundir/00.lgs_polish/lgs.sort.bam.list -r ont -o genome.nextpolish.part000.fasta
[110589 INFO] 2021-12-07 11:22:42 Corrected step options:
[110589 INFO] 2021-12-07 11:22:42
split: 0
process: 1
auto: True
read_type: 1
block_index: 0
window: 5000000
uppercase: False
alignment_score_ratio: 0.8
alignment_identity_ratio: 0.8
out: genome.nextpolish.part000.fasta
genome: path/to/NextPolish/test_data/./01_rundir/00.lgs_polish/input.genome.fasta
bam_list: path/to/NextPolish/test_data/./01_rundir/00.lgs_polish/lgs.sort.bam.list
block: path/to/NextPolish/test_data/./01_rundir/00.lgs_polish/input.genome.fasta.blc
[110589 WARNING] 2021-12-07 11:22:42 Adjust -p from 1 to 0, -w from 5000000 to 5000000, logical CPUs:4, available RAM:~6G, use -a to disable automatic adjustment.
Traceback (most recent call last):
File "path/to/NextPolish/lib/nextpolish2.py", line 260, in <module>
main(args)
File "path/to/NextPolish/lib/nextpolish2.py", line 192, in main
pool = Pool(args.process, initializer=start)
File "/path/to/anaconda3/lib/python3.9/multiprocessing/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
File "/path/to/anaconda3/lib/python3.9/multiprocessing/pool.py", line 205, in __init__
raise ValueError("Number of processes must be at least 1")
ValueError: Number of processes must be at least 1
Command exited with non-zero status 1
0.08user 0.00system 0:00.09elapsed 100%CPU (0avgtext+0avgdata 16528maxresident)k
0inputs+8outputs (0major+2520minor)pagefaults 0swaps
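The crash itself is easy to reproduce in isolation: the warning shows NextPolish auto-adjusted -p from 1 to 0 because it judged the available RAM (~6G) insufficient, and Python's multiprocessing.Pool refuses a worker count below 1. A minimal sketch (the 0 here is just the adjusted value from the log, not NextPolish's actual sizing heuristic):

```python
from multiprocessing import Pool

processes = 0  # the value -p was auto-adjusted to in the log above
try:
    pool = Pool(processes)  # CPython rejects pools with no workers
except ValueError as err:
    print(err)  # Number of processes must be at least 1
```

In other words, the traceback is a downstream symptom: the real issue is that the memory check left zero worker processes to start.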
The RAM is too small
Is there a proper way to increase it?
According to the NextPolish FAQ it should be possible to raise it from the default 3G used by Paralleltask, but changing it in the cluster.cfg file has no effect, and I am not sure where exactly the submit parameter is supposed to be used. I still receive the same error.
Also, how is it that the memory isn't enough when only 3G is requested?
The compute node you submitted to only has ~6 GB of memory; you cannot change that by adjusting parameters. You need to run the job on a different compute node.
EDITED:
Maybe you just forgot to change job_type = local to job_type = sge (or another scheduler) if you want to submit your job to a computer cluster.
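For reference, a minimal run.cfg fragment for switching the scheduler (SGE is just an example; the other [General] options stay as in your existing file):

```
[General]
job_type = sge
```

With a non-local job_type, Paralleltask should submit the subtasks through that scheduler instead of running them on the local machine.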
Good to know. At least it's not a problem with my installation or the files I submitted.
These errors come from a local test; when the tool runs on a cluster it works perfectly.
I was trying to figure out why the behavior differed, but if it's due to a hardware limitation then there is nothing to do for now.
Thanks.
Hi all. I'm trying to polish a Flye assembly built from nanopore reads, but I cannot make the command work: I get errors whose source I cannot determine, and I don't know how to proceed. The same errors also appear with the test data (
nextPolish test_data/run.cfg
).
Operating system: Linux Mint 20.2 Uma (Ubuntu Focal base)
GCC: gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)
Python: Python 3.9.7
NextPolish: nextPolish v1.4.0
Input files (https://nextpolish.readthedocs.io/en/latest/TUTORIAL.html#polishing-using-long-reads-only)
lgs.fofn: I have only one FASTQ file because I'm testing the tool to adapt it to CWL, proceeding with
ls /path/to/fastq_runid.fastq > lgs.fofn
run.cfg: modified parts are commented
Log
Regards, Alex