jtamames / SqueezeMeta

A complete pipeline for metagenomic analysis

Error at assembly with canu #612

Closed · pedres closed this issue 1 year ago

pedres commented 1 year ago

Hi,

I am running an analysis with three large fastq files from MinION, and the program fails at the first step. The report from the server points to a problem at line 46 of assembly_canu.pl. I think that line builds the command for running canu, but I cannot see where the error is.

slurm-1721713.txt test4.txt

syslog.txt

Thanks a lot for your help

Manuel

jtamames commented 1 year ago

Hello Manuel,

Try this: edit the script 01.run_all_assemblies.pl in the scripts directory and change line 342 to:

    if($p2name) { $par2name="$datapath/raw_fastq/$p2name"; }

Then restart, or run again. Let me know if it works.

pedres commented 1 year ago

It did not work; apparently it gives the same error at line 46. I am using SqueezeMeta v1.6.0.

syslog.txt slurm-1741959.txt

jtamames commented 1 year ago

Hello,

The canu log complains about failing to submit jobs:

-- Failed to submit compute jobs.  Delay 10 seconds and try again.

CRASH:
CRASH: canu 2.2
CRASH: Please panic, this is abnormal.
CRASH:
CRASH:   Failed to submit compute jobs.
CRASH:
CRASH: Failed at /mnt/netapp2/Store_uni/home/uvi/ba/mav/conda/envs/SqueezeMeta/SqueezeMeta/bin/canu-2.2/bin/../lib/site_perl/canu/Execution.pm line 1259.
CRASH:  canu::Execution::submitOrRunParallelJob("NANO", "meryl", "correction/0-mercounts", "meryl-count", 1) called at /mnt/netapp2/Store_uni/home/uvi/ba/mav/conda/envs/SqueezeMeta/SqueezeMeta/bin/canu-2.2/bin/../lib/site_perl/canu/Meryl.pm line 847
CRASH:  canu::Meryl::merylCountCheck("NANO", "cor") called at /mnt/netapp2/Store_uni/home/uvi/ba/mav/conda/envs/SqueezeMeta/SqueezeMeta/bin/canu-2.2/bin/canu line 1076
CRASH: 
CRASH: Last 50 lines of the relevant log file (correction/0-mercounts/meryl-count.jobSubmit-01.out):
CRASH:
CRASH: sbatch: error: Batch job submission failed: Time limit specification required, but not provided

This seems to be cluster-related. Could you ask your system admin about it?

Best, J
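
Judging from the sbatch message above, the cluster apparently requires an explicit time limit on every submission, i.e. something along these lines (the job script and the one-day limit are placeholders):

    # Slurm here rejects the job unless a --time limit is given explicitly
    sbatch --time=1-00:00:00 job.sh

canu does not add that flag on its own, which would explain the rejection.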

lorcai commented 1 year ago

Hi,

A bit late, but I recently ran across something similar, so I will post what I've seen in case it is useful to somebody.

If you use the canu assembler on a cluster, it tries to submit jobs to the queue, but according to the docs it won't include time limits in its calls: https://canu.readthedocs.io/en/latest/faq.html?highlight=grid#how-do-i-run-canu-on-my-slurm-sge-pbs-lsf-torque-system

I had a similar problem because our cluster requires you to specify a user account and a partition (the -A and -p flags of sbatch), and canu does not include those either.

The time limit problem should be solved by passing the flag gridOptions="--time=d-hh:mm:ss" to canu; canu passes the given options on to every job it submits to the queue.
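
For reference, outside of SqueezeMeta the same option goes directly on the canu command line. A minimal sketch, with the project name, output directory, genome size and read file as placeholders:

    # canu forwards whatever gridOptions contains to every sbatch call it makes
    canu -p asm -d asm_out genomeSize=50m \
        -nanopore reads.fastq.gz \
        gridOptions="--time=1-00:00:00"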

I passed that option to canu through SqueezeMeta with the flag -assembly_options.

So something such as:

    SqueezeMeta.pl -m assemblymode -p projectname -s samplefile -f sampledir \
        --minion --canumem <mem> -b <block size> -t <threads> \
        -assembly_options "gridOptions='--time=1-00:00'"

should get SqueezeMeta running and set a time limit of one day on the jobs sent by canu.

In my case I needed:

    -assembly_options "gridOptions='-A account -p partition'"
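
If your cluster enforces all of these at once, the sbatch flags can be combined in a single gridOptions string. Untested on my side, and the account and partition names below are placeholders:

    SqueezeMeta.pl -m assemblymode -p projectname -s samplefile -f sampledir \
        --minion -assembly_options "gridOptions='--time=1-00:00 -A myaccount -p mypartition'"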

I checked that canu was submitting the jobs with those flags, but I still had problems: SqueezeMeta seemed to move on and tried to access the assembly while the canu jobs were still running.

I ended up using -assembly_options "useGrid=false" so that canu would not try to submit jobs at all (full invocation below). That worked, but it failed again when running diamond.
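
For completeness, that is the same invocation as above with the assembly options swapped:

    SqueezeMeta.pl -m assemblymode -p projectname -s samplefile -f sampledir \
        --minion -assembly_options "useGrid=false"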

In the end I ran SqueezeMeta locally for now, as this may be a problem with the SqueezeMeta installation on my cluster (running sqm_longreads.pl also fails when collapsing the blast hits because the Perl version does not support threads; see the quick check below).
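
A quick way to check whether a given Perl was built with thread support is:

    # prints usethreads='define' if this Perl supports threads, usethreads='undef' otherwise
    perl -V:usethreads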

Regards, Ivan.

pedres commented 1 year ago

Thanks a lot Ivan,

I will try it.

fpusan commented 1 year ago

Closing due to lack of activity, feel free to reopen!