hidvegin closed this issue 4 years ago.
The issue is that your grid has a default time limit which was exceeded:
slurmstepd: error: *** JOB 388030 ON cn01 CANCELLED AT 2020-02-25T18:54:58 DUE TO TIME LIMIT ***
which looks to be about 2 hours based on the job start time and the time it was cancelled. As noted in the FAQ (https://canu.readthedocs.io/en/latest/faq.html#how-do-i-run-canu-on-my-slurm-sge-pbs-lsf-torque-system), Canu does not specify a time limit as part of its submit commands; you need to add the gridOptions parameter to explicitly request a longer one. Given your genome size, I'd say 3-4 days (or longer if your grid allows it).
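For example, assuming a Slurm partition named `prod` that allows multi-day jobs (the partition name, output directory, and read file here are placeholders, not your exact setup), the time limit can be checked and then passed through to every job Canu submits:

```shell
# See what time limit each partition actually allows (%l is the max job time):
sinfo -o "%P %l"

# Request a 4-day limit on all jobs Canu submits to the grid:
canu -p asm -d asm-out genomeSize=4g \
     gridOptions="--time=4-00:00:00 --partition=prod" \
     -pacbio reads.fastq.gz
```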
On another note, there is no 2.0 release; you're running an untested commit. I would recommend staying with the 1.9 release for production assembly, which you can get from the releases page.
Thank you for your answer. I tried your suggestion about the gridOptions parameter and set 7 days, since that is the maximum I can allocate in SLURM. I am trying the 2.0-develop version only as a test. I have asked the administrator of the HPC cluster to install Canu v1.9 and have to wait for that; until then I am testing v2.0.
I re-ran Canu v2.0 with your suggestions. This was the command:
canu -correct -p lculinaris -d $HOME/output/canu_trim/lculinaris2 genomeSize=4.0g batMemory=124g batThreads=48 gridOptions="--time=6-00:00:00 --partition=prod --account ID" -pacbio $HOME/input/pacbio_raw/LC001pacbio.fastq.gz
Several days later I got this output from Canu, ending in an error message:
Found perl:
/usr/bin/perl
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
Found java:
/mnt/stori/home/fk8jybr/.linuxbrew/bin/java
openjdk version "1.8.0_242"
Found canu:
/mnt/stori/home/fk8jybr/canu/Linux-amd64/bin/canu
Canu snapshot v2.0-development +375 changes (r9868 9492acc56ebe1ff0c7ee7b13ae1704a7db68dc5d)
-- Canu snapshot v2.0-development +375 changes (r9868 9492acc56ebe1ff0c7ee7b13ae1704a7db68dc5d)
--
-- Detected Java(TM) Runtime Environment '1.8.0_242' (from 'java') with -d64 support.
--
-- WARNING:
-- WARNING: Failed to run gnuplot using command 'gnuplot'.
-- WARNING: Plots will be disabled.
-- WARNING:
--
-- Detected 48 CPUs and 126 gigabytes of memory.
-- Detected Slurm with 'sinfo' binary in /usr/bin/sinfo.
-- Detected Slurm with task IDs up to 511 allowed.
--
-- Found 2 hosts with 24 cores and 22 GB memory under Slurm control.
-- Found 48 hosts with 48 cores and 124 GB memory under Slurm control.
--
-- (tag)Threads
-- (tag)Memory |
-- (tag) | | algorithm
-- ------- ---------- -------- -----------------------------
-- Grid: meryl 24.000 GB 8 CPUs (k-mer counting)
-- Grid: hap 16.000 GB 24 CPUs (read-to-haplotype assignment)
-- Grid: cormhap 22.000 GB 16 CPUs (overlap detection with mhap)
-- Grid: obtovl 24.000 GB 16 CPUs (overlap detection)
-- Grid: utgovl 24.000 GB 16 CPUs (overlap detection)
-- Grid: cor 24.000 GB 4 CPUs (read correction)
-- Grid: ovb 4.000 GB 1 CPU (overlap store bucketizer)
-- Grid: ovs 32.000 GB 1 CPU (overlap store sorting)
-- Grid: red 41.000 GB 8 CPUs (read error detection)
-- Grid: oea 8.000 GB 1 CPU (overlap error adjustment)
-- Grid: bat 124.000 GB 48 CPUs (contig construction with bogart)
-- Grid: cns -.--- GB 8 CPUs (consensus)
-- Grid: gfa 64.000 GB 32 CPUs (GFA alignment and processing)
--
-- In 'lculinaris.seqStore', found PacBio CLR reads:
-- PacBio CLR: 1
--
-- Raw: 1
--
-- Generating assembly 'lculinaris' in '/mnt/stori/home/fk8jybr/output/canu_trim/lculinaris2':
-- - only correct raw reads.
--
-- Parameters:
--
-- genomeSize 4000000000
--
-- Overlap Generation Limits:
-- corOvlErrorRate 0.2400 ( 24.00%)
-- obtOvlErrorRate 0.0450 ( 4.50%)
-- utgOvlErrorRate 0.0450 ( 4.50%)
--
-- Overlap Processing Limits:
-- corErrorRate 0.3000 ( 30.00%)
-- obtErrorRate 0.0450 ( 4.50%)
-- utgErrorRate 0.0450 ( 4.50%)
-- cnsErrorRate 0.0750 ( 7.50%)
--
--
-- BEGIN CORRECTION
--
-- No change in report.
-- Found 1 Kmer counting (meryl) outputs.
-- No change in report.
-- Finished stage 'cor-merylCountCheck', reset canuIteration.
-- No change in report.
--
-- Running jobs. First attempt out of 2.
--
-- 'meryl-process.jobSubmit-01.sh' -> job 388050 task 1.
--
----------------------------------------
-- Starting command on Thu Feb 27 03:52:01 2020 with 139449.46 GB free disk space
cd /mnt/stori/home/fk8jybr/output/canu_trim/lculinaris2
sbatch \
--depend=afterany:388050 \
--cpus-per-task=1 \
--mem-per-cpu=4g \
--time=6-00:00:00 \
--partition=prod \
--account denolen \
-D `pwd` \
-J 'canu_lculinaris' \
-o canu-scripts/canu.02.out canu-scripts/canu.02.sh
Submitted batch job 388051
-- Finished on Thu Feb 27 03:52:01 2020 (lickety-split) with 139449.46 GB free disk space
----------------------------------------
[Canu startup banner and grid configuration identical to the first block above]
--
-- No change in report.
-- Meryl finished successfully. Kmer frequency histogram:
--
-- WARNING: gnuplot failed.
--
----------------------------------------
--
-- 16-mers Fraction
-- Occurrences NumMers Unique Total
-- 1- 1 0 0.0000 0.0000
-- 2- 2 64126708 **************** 0.0306 0.0010
-- 3- 4 169270722 ******************************************* 0.0692 0.0029
-- 5- 7 258298328 ****************************************************************** 0.1542 0.0090
-- 8- 11 273347908 ********************************************************************** 0.2713 0.0222
-- 12- 16 242748348 ************************************************************** 0.3920 0.0424
-- 17- 22 201336966 *************************************************** 0.4998 0.0683
-- 23- 29 162750551 ***************************************** 0.5903 0.0979
-- 30- 37 130158672 ********************************* 0.6641 0.1298
-- 38- 46 103801910 ************************** 0.7236 0.1627
-- 47- 56 82789664 ********************* 0.7713 0.1956
-- 57- 67 66296011 **************** 0.8096 0.2279
-- 68- 79 53401713 ************* 0.8404 0.2590
-- 80- 92 43321562 *********** 0.8652 0.2888
-- 93- 106 35361532 ********* 0.8855 0.3171
-- 107- 121 29075935 ******* 0.9020 0.3439
-- 122- 137 24055121 ****** 0.9157 0.3691
-- 138- 154 20033667 ***** 0.9270 0.3928
-- 155- 172 16792932 **** 0.9364 0.4152
-- 173- 191 14151421 *** 0.9443 0.4361
-- 192- 211 12007133 *** 0.9510 0.4558
-- 212- 232 10240165 ** 0.9567 0.4743
-- 233- 254 8778255 ** 0.9615 0.4917
-- 255- 277 7558987 * 0.9657 0.5080
-- 278- 301 6548117 * 0.9693 0.5234
-- 302- 326 5700869 * 0.9724 0.5380
-- 327- 352 4986015 * 0.9751 0.5517
-- 353- 379 4377721 * 0.9774 0.5646
-- 380- 407 3856902 0.9795 0.5769
-- 408- 436 3406540 0.9814 0.5886
-- 437- 466 3027570 0.9830 0.5996
-- 467- 497 2695673 0.9844 0.6101
-- 498- 529 2403736 0.9857 0.6201
-- 530- 562 2157183 0.9868 0.6295
-- 563- 596 1935784 0.9879 0.6386
-- 597- 631 1742844 0.9888 0.6472
-- 632- 667 1573870 0.9896 0.6554
-- 668- 704 1427589 0.9904 0.6633
-- 705- 742 1297292 0.9910 0.6708
-- 743- 781 1181074 0.9917 0.6780
-- 782- 821 1078513 0.9922 0.6849
--
-- 0 (max occurrences)
-- 129844687695 (total mers, non-unique)
-- 2094348602 (distinct mers, non-unique)
-- 0 (unique mers)
-- Report changed.
-- Finished stage 'meryl-process', reset canuIteration.
--
-- Removing meryl database 'correction/0-mercounts/lculinaris.ms16'.
--
-- OVERLAPPER (mhap) (correction)
--
-- Set corMhapSensitivity=normal based on read coverage of 32.
--
-- PARAMETERS: hashes=512, minMatches=3, threshold=0.78
--
-- Given 19.8 GB, can fit 59400 reads per block.
-- For 281 blocks, set stride to 70 blocks.
-- Logging partitioning to 'correction/1-overlapper/partitioning.log'.
-- Configured 280 mhap precompute jobs.
-- Configured 697 mhap overlap jobs.
-- No change in report.
-- Finished stage 'cor-mhapConfigure', reset canuIteration.
-- No change in report.
--
-- Running jobs. First attempt out of 2.
--
-- 'precompute.jobSubmit-01.sh' -> job 388052 tasks 1-280.
--
----------------------------------------
-- Starting command on Thu Feb 27 03:55:19 2020 with 139463.895 GB free disk space
cd /mnt/stori/home/fk8jybr/output/canu_trim/lculinaris2
sbatch \
--depend=afterany:388052 \
--cpus-per-task=1 \
--mem-per-cpu=4g \
--time=6-00:00:00 \
--partition=prod \
--account denolen \
-D `pwd` \
-J 'canu_lculinaris' \
-o canu-scripts/canu.03.out canu-scripts/canu.03.sh
Submitted batch job 388053
-- Finished on Thu Feb 27 03:55:20 2020 (one second) with 139463.135 GB free disk space
----------------------------------------
[Canu startup banner and grid configuration identical to the first block above]
--
-- No change in report.
--
-- OVERLAPPER (mhap) (correction) complete, not rewriting scripts.
--
-- All 280 mhap precompute jobs finished successfully.
-- No change in report.
-- Finished stage 'cor-mhapPrecomputeCheck', reset canuIteration.
-- No change in report.
--
-- Running jobs. First attempt out of 2.
--
-- 'mhap.jobSubmit-01.sh' -> job 388333 tasks 1-511.
-- 'mhap.jobSubmit-02.sh' -> job 388334 tasks 512-697.
--
----------------------------------------
-- Starting command on Thu Feb 27 09:26:33 2020 with 139143.537 GB free disk space
cd /mnt/stori/home/fk8jybr/output/canu_trim/lculinaris2
sbatch \
--depend=afterany:388333:388334 \
--cpus-per-task=1 \
--mem-per-cpu=4g \
--time=6-00:00:00 \
--partition=prod \
--account denolen \
-D `pwd` \
-J 'canu_lculinaris' \
-o canu-scripts/canu.04.out canu-scripts/canu.04.sh
Submitted batch job 388335
-- Finished on Thu Feb 27 09:26:33 2020 (furiously fast) with 139143.537 GB free disk space
----------------------------------------
[Canu startup banner and grid configuration identical to the first block above]
--
-- No change in report.
--
-- OVERLAPPER (mhap) (correction) complete, not rewriting scripts.
--
--
-- Mhap overlap jobs failed, retry.
-- job correction/1-overlapper/results/000247.ovb FAILED.
-- job correction/1-overlapper/results/000304.ovb FAILED.
-- job correction/1-overlapper/results/000305.ovb FAILED.
-- job correction/1-overlapper/results/000307.ovb FAILED.
-- job correction/1-overlapper/results/000308.ovb FAILED.
-- job correction/1-overlapper/results/000322.ovb FAILED.
-- job correction/1-overlapper/results/000323.ovb FAILED.
-- job correction/1-overlapper/results/000325.ovb FAILED.
-- job correction/1-overlapper/results/000326.ovb FAILED.
-- job correction/1-overlapper/results/000327.ovb FAILED.
-- job correction/1-overlapper/results/000328.ovb FAILED.
-- job correction/1-overlapper/results/000329.ovb FAILED.
-- job correction/1-overlapper/results/000331.ovb FAILED.
-- job correction/1-overlapper/results/000332.ovb FAILED.
-- job correction/1-overlapper/results/000333.ovb FAILED.
-- job correction/1-overlapper/results/000334.ovb FAILED.
-- job correction/1-overlapper/results/000335.ovb FAILED.
-- job correction/1-overlapper/results/000336.ovb FAILED.
-- job correction/1-overlapper/results/000337.ovb FAILED.
-- job correction/1-overlapper/results/000338.ovb FAILED.
-- job correction/1-overlapper/results/000339.ovb FAILED.
-- job correction/1-overlapper/results/000340.ovb FAILED.
-- job correction/1-overlapper/results/000341.ovb FAILED.
-- job correction/1-overlapper/results/000342.ovb FAILED.
-- job correction/1-overlapper/results/000343.ovb FAILED.
-- job correction/1-overlapper/results/000344.ovb FAILED.
-- job correction/1-overlapper/results/000345.ovb FAILED.
-- job correction/1-overlapper/results/000346.ovb FAILED.
-- job correction/1-overlapper/results/000347.ovb FAILED.
-- job correction/1-overlapper/results/000348.ovb FAILED.
-- job correction/1-overlapper/results/000349.ovb FAILED.
-- job correction/1-overlapper/results/000350.ovb FAILED.
-- job correction/1-overlapper/results/000351.ovb FAILED.
-- job correction/1-overlapper/results/000353.ovb FAILED.
-- job correction/1-overlapper/results/000354.ovb FAILED.
-- job correction/1-overlapper/results/000355.ovb FAILED.
-- job correction/1-overlapper/results/000357.ovb FAILED.
-- job correction/1-overlapper/results/000358.ovb FAILED.
--
-- No change in report.
--
-- Running jobs. Second attempt out of 2.
--
-- 'mhap.jobSubmit-01.sh' -> job 389042 task 247.
-- 'mhap.jobSubmit-02.sh' -> job 389043 tasks 304-305.
-- 'mhap.jobSubmit-03.sh' -> job 389044 tasks 307-308.
-- 'mhap.jobSubmit-04.sh' -> job 389045 tasks 322-323.
-- 'mhap.jobSubmit-05.sh' -> job 389046 tasks 325-329.
-- 'mhap.jobSubmit-06.sh' -> job 389047 tasks 331-351.
-- 'mhap.jobSubmit-07.sh' -> job 389048 tasks 353-355.
-- 'mhap.jobSubmit-08.sh' -> job 389049 tasks 357-358.
--
----------------------------------------
-- Starting command on Mon Mar 2 10:56:00 2020 with 129760.222 GB free disk space
cd /mnt/stori/home/fk8jybr/output/canu_trim/lculinaris2
sbatch \
--depend=afterany:389042:389043:389044:389045:389046:389047:389048:389049 \
--cpus-per-task=1 \
--mem-per-cpu=5g \
--time=6-00:00:00 \
--partition=prod \
--account denolen \
-D `pwd` \
-J 'canu_lculinaris' \
-o canu-scripts/canu.05.out canu-scripts/canu.05.sh
Submitted batch job 389050
-- Finished on Mon Mar 2 10:56:00 2020 (furiously fast) with 129760.215 GB free disk space
----------------------------------------
[Canu startup banner and grid configuration identical to the first block above]
--
-- No change in report.
--
-- OVERLAPPER (mhap) (correction) complete, not rewriting scripts.
--
--
-- Mhap overlap jobs failed, tried 2 times, giving up.
-- job correction/1-overlapper/results/000247.ovb FAILED.
-- job correction/1-overlapper/results/000304.ovb FAILED.
-- job correction/1-overlapper/results/000305.ovb FAILED.
-- job correction/1-overlapper/results/000307.ovb FAILED.
-- job correction/1-overlapper/results/000308.ovb FAILED.
-- job correction/1-overlapper/results/000322.ovb FAILED.
-- job correction/1-overlapper/results/000323.ovb FAILED.
-- job correction/1-overlapper/results/000325.ovb FAILED.
-- job correction/1-overlapper/results/000326.ovb FAILED.
-- job correction/1-overlapper/results/000327.ovb FAILED.
-- job correction/1-overlapper/results/000328.ovb FAILED.
-- job correction/1-overlapper/results/000329.ovb FAILED.
-- job correction/1-overlapper/results/000331.ovb FAILED.
-- job correction/1-overlapper/results/000332.ovb FAILED.
-- job correction/1-overlapper/results/000333.ovb FAILED.
-- job correction/1-overlapper/results/000334.ovb FAILED.
-- job correction/1-overlapper/results/000335.ovb FAILED.
-- job correction/1-overlapper/results/000336.ovb FAILED.
-- job correction/1-overlapper/results/000337.ovb FAILED.
-- job correction/1-overlapper/results/000338.ovb FAILED.
-- job correction/1-overlapper/results/000339.ovb FAILED.
-- job correction/1-overlapper/results/000340.ovb FAILED.
-- job correction/1-overlapper/results/000341.ovb FAILED.
-- job correction/1-overlapper/results/000342.ovb FAILED.
-- job correction/1-overlapper/results/000343.ovb FAILED.
-- job correction/1-overlapper/results/000344.ovb FAILED.
-- job correction/1-overlapper/results/000345.ovb FAILED.
-- job correction/1-overlapper/results/000346.ovb FAILED.
-- job correction/1-overlapper/results/000347.ovb FAILED.
-- job correction/1-overlapper/results/000348.ovb FAILED.
-- job correction/1-overlapper/results/000349.ovb FAILED.
-- job correction/1-overlapper/results/000350.ovb FAILED.
-- job correction/1-overlapper/results/000351.ovb FAILED.
-- job correction/1-overlapper/results/000353.ovb FAILED.
-- job correction/1-overlapper/results/000354.ovb FAILED.
-- job correction/1-overlapper/results/000355.ovb FAILED.
-- job correction/1-overlapper/results/000357.ovb FAILED.
-- job correction/1-overlapper/results/000358.ovb FAILED.
--
ABORT:
ABORT: Canu snapshot v2.0-development +375 changes (r9868 9492acc56ebe1ff0c7ee7b13ae1704a7db68dc5d)
ABORT: Don't panic, but a mostly harmless error occurred and Canu stopped.
ABORT: Try restarting. If that doesn't work, ask for help.
ABORT:
How can I resolve this issue?
Post one of the failing job logs; it's likely a JVM or grid issue.
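To find those logs, look at the per-task output files Canu leaves next to the job scripts; for the mhap stage they live under `correction/1-overlapper/` (the exact file names below reflect the usual Canu layout and may differ slightly between versions):

```shell
# Each grid task writes its stdout/stderr to a numbered *.out file:
ls correction/1-overlapper/*.out | head

# Scan them for the usual suspects (JVM aborts, OOM kills, time limits):
grep -l -i -E "error|exception|killed|cancelled" correction/1-overlapper/*.out
```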
I'll again reiterate that running the unreleased version is not recommended; it's not an appropriate test, since you'll probably hit errors not present in the release. You also don't need to wait for a release to be installed for you. If you were able to download and compile the tip, you can just as easily download and compile the release tarball instead, or simply extract the pre-compiled binaries in your home directory.
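A minimal sketch of fetching the pre-compiled 1.9 binaries into a home directory (the asset name is the one used on the GitHub releases page; verify it there before downloading):

```shell
cd $HOME
wget https://github.com/marbl/canu/releases/download/v1.9/canu-1.9.Linux-amd64.tar.xz
tar -xJf canu-1.9.Linux-amd64.tar.xz

# The driver is then available at:
$HOME/canu-1.9/Linux-amd64/bin/canu --version
```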
I installed Canu v1.9 as you suggested. During the ovb (overlap store bucketizer) phase the cluster went down because of a storage problem. I re-ran Canu with the same parameters and got this error:
Found perl:
/usr/bin/perl
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
Found java:
/mnt/stori/home/fk8jybr/.linuxbrew/bin/java
openjdk version "1.8.0_242"
Found canu:
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/canu
Canu 1.9
-- Canu 1.9
--
-- CITATIONS
--
-- Koren S, Walenz BP, Berlin K, Miller JR, Phillippy AM.
-- Canu: scalable and accurate long-read assembly via adaptive k-mer weighting and repeat separation.
-- Genome Res. 2017 May;27(5):722-736.
-- http://doi.org/10.1101/gr.215087.116
--
-- Koren S, Rhie A, Walenz BP, Dilthey AT, Bickhart DM, Kingan SB, Hiendleder S, Williams JL, Smith TPL, Phillippy AM.
-- De novo assembly of haplotype-resolved genomes with trio binning.
-- Nat Biotechnol. 2018
--     https://doi.org/10.1038/nbt.4277
--
-- Read and contig alignments during correction, consensus and GFA building use:
-- Šošic M, Šikic M.
--     Edlib: a C/C++ library for fast, exact sequence alignment using edit distance.
-- Bioinformatics. 2017 May 1;33(9):1394-1395.
-- http://doi.org/10.1093/bioinformatics/btw753
--
-- Overlaps are generated using:
-- Berlin K, et al.
-- Assembling large genomes with single-molecule sequencing and locality-sensitive hashing.
-- Nat Biotechnol. 2015 Jun;33(6):623-30.
-- http://doi.org/10.1038/nbt.3238
--
-- Myers EW, et al.
-- A Whole-Genome Assembly of Drosophila.
-- Science. 2000 Mar 24;287(5461):2196-204.
-- http://doi.org/10.1126/science.287.5461.2196
--
-- Corrected read consensus sequences are generated using an algorithm derived from FALCON-sense:
-- Chin CS, et al.
-- Phased diploid genome assembly with single-molecule real-time sequencing.
-- Nat Methods. 2016 Dec;13(12):1050-1054.
-- http://doi.org/10.1038/nmeth.4035
--
-- Contig consensus sequences are generated using an algorithm derived from pbdagcon:
-- Chin CS, et al.
-- Nonhybrid, finished microbial genome assemblies from long-read SMRT sequencing data.
-- Nat Methods. 2013 Jun;10(6):563-9
-- http://doi.org/10.1038/nmeth.2474
--
-- CONFIGURE CANU
--
-- Detected Java(TM) Runtime Environment '1.8.0_242' (from 'java') with -d64 support.
--
-- WARNING:
-- WARNING: Failed to run gnuplot using command 'gnuplot'.
-- WARNING: Plots will be disabled.
-- WARNING:
--
-- Detected 48 CPUs and 126 gigabytes of memory.
-- Detected Slurm with 'sinfo' binary in /usr/bin/sinfo.
-- Detected Slurm with task IDs up to 511 allowed.
--
-- Found 2 hosts with 24 cores and 22 GB memory under Slurm control.
-- Found 48 hosts with 48 cores and 124 GB memory under Slurm control.
--
-- (tag)Threads
-- (tag)Memory |
-- (tag) | | algorithm
-- ------- ------ -------- -----------------------------
-- Grid: meryl 24 GB 8 CPUs (k-mer counting)
-- Grid: hap 16 GB 24 CPUs (read-to-haplotype assignment)
-- Grid: cormhap 22 GB 16 CPUs (overlap detection with mhap)
-- Grid: obtovl 24 GB 16 CPUs (overlap detection)
-- Grid: utgovl 24 GB 16 CPUs (overlap detection)
-- Grid: cor 24 GB 4 CPUs (read correction)
-- Grid: ovb 4 GB 1 CPU (overlap store bucketizer)
-- Grid: ovs 32 GB 1 CPU (overlap store sorting)
-- Grid: red 41 GB 8 CPUs (read error detection)
-- Grid: oea 8 GB 1 CPU (overlap error adjustment)
-- Grid: bat 124 GB 48 CPUs (contig construction with bogart)
-- Grid: cns --- GB 8 CPUs (consensus)
-- Grid: gfa 64 GB 32 CPUs (GFA alignment and processing)
--
-- In 'lculinaris.seqStore', found PacBio reads:
-- Raw: 16602048
-- Corrected: 0
-- Trimmed: 0
--
-- Generating assembly 'lculinaris' in '/mnt/stori/home/fk8jybr/output/canu_corrected/lculinaris'
--
-- Parameters:
--
-- genomeSize 4000000000
--
-- Overlap Generation Limits:
-- corOvlErrorRate 0.2400 ( 24.00%)
-- obtOvlErrorRate 0.0450 ( 4.50%)
-- utgOvlErrorRate 0.0450 ( 4.50%)
--
-- Overlap Processing Limits:
-- corErrorRate 0.3000 ( 30.00%)
-- obtErrorRate 0.0450 ( 4.50%)
-- utgErrorRate 0.0450 ( 4.50%)
-- cnsErrorRate 0.0750 ( 7.50%)
--
--
-- BEGIN CORRECTION
--
-- No change in report.
--
-- Creating overlap store correction/lculinaris.ovlStore using:
-- 655 buckets
-- 655 slices
-- using at most 29 GB memory each
--
-- Overlap store bucketizer jobs failed, tried 2 times, giving up.
-- job correction/lculinaris.ovlStore.BUILDING/bucket0001 FAILED.
-- job correction/lculinaris.ovlStore.BUILDING/bucket0002 FAILED.
-- job correction/lculinaris.ovlStore.BUILDING/bucket0003 FAILED.
-- job correction/lculinaris.ovlStore.BUILDING/bucket0004 FAILED.
-- job correction/lculinaris.ovlStore.BUILDING/bucket0005 FAILED.
-- job correction/lculinaris.ovlStore.BUILDING/bucket0006 FAILED.
-- job correction/lculinaris.ovlStore.BUILDING/bucket0007 FAILED.
-- [542 similar lines omitted: bucket jobs 0008 through 0651 (with occasional gaps) also FAILED.]
-- job correction/lculinaris.ovlStore.BUILDING/bucket0652 FAILED.
-- job correction/lculinaris.ovlStore.BUILDING/bucket0653 FAILED.
-- job correction/lculinaris.ovlStore.BUILDING/bucket0654 FAILED.
--
ABORT:
ABORT: Canu 1.9
ABORT: Don't panic, but a mostly harmless error occurred and Canu stopped.
ABORT: Try restarting. If that doesn't work, ask for help.
ABORT:
Should I delete the /correction/lculinaris.ovlStore.BUILDING/ folder and then re-run canu with the same parameters I used to generate the 0-mercounts and 1-overlapper folders? When I re-ran canu before, I did not delete the lculinaris.ovlStore.BUILDING folder.
On the suspicion that it was these jobs that killed your file server, instead of deleting the ovlStore directory and restarting canu, let's run these by hand with a slight modification to limit the number of jobs that run at the same time.
1) Remove all the bucket* directories in lculinaris.ovlStore.BUILDING.
2) Edit scripts/1-bucketize.jobSubmit-01.sh to change the -a option from something similar to -a 1-654 to -a 1-654%20. This will tell slurm to run only 20 jobs at the same time. I picked 20 for no particular reason; it's probably wildly conservative.
3) Still in directory lculinaris.ovlStore.BUILDING (that is, NOT in the scripts/ directory), submit the jobs to the grid with sh scripts/1-bucketize.jobSubmit-01.sh
4) Wait for them to finish and then restart canu.
Or you can delete the ovlStore directory entirely, restart canu, and see if you get a nasty email from your system managers. It depends on your mood, I guess....
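For reference, the `%20` edit in step 2 can be sketched on a sample line (the sed expression is an assumption about the script's exact formatting; check your real `-a` line first):

```shell
#!/bin/sh
# Sketch: append a %20 concurrency limit to a SLURM array option.
# "-a 1-654" lets the scheduler run all 654 tasks as it pleases;
# "-a 1-654%20" caps it at 20 tasks running at any one time.
line='-a 1-654 \'
printf '%s\n' "$line" | sed 's/^\(-a [0-9][0-9]*-[0-9][0-9]*\) /\1%20 /'
# prints: -a 1-654%20 \
```

Against the real file you would run the same sed with `-i` on scripts/1-bucketize.jobSubmit-01.sh (after backing it up).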
Thank you for your answer. I deleted all of the bucket* folders and changed scripts/1-bucketize.jobSubmit-01.sh to this:
#!/bin/sh
sbatch \
--cpus-per-task=1 --mem-per-cpu=5g --time=7-00:00:00 --partition=prod --account denolen -o logs/1-bucketize.%A_%a.out \
-D `pwd` -J "ovB_lculinaris" \
-a 1-654%20 \
`pwd`/scripts/1-bucketize.sh 0 \
> ./scripts/1-bucketize.jobSubmit-01.out 2>&1
Should I change anything else? If I understand you correctly, I should run this script with sh scripts/1-bucketize.jobSubmit-01.sh in the lculinaris.ovlStore.BUILDING folder, so I do not have to submit any other sbatch script to SLURM. And once this script has finished, I can re-run canu with the same parameters I used before.
Yes, that's right: run the script from inside the BUILDING folder. You shouldn't have to run any other commands. Wait until all the jobs have completed (using SLURM to monitor them). Log files will be written to the logs/1-bucket* files; you can check whether they report any errors. If they complete and you don't see any errors, then you can re-run the same canu command and parameters you used before.
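A quick way to do that check might be the following sketch; it assumes the log names produced by the `-o logs/1-bucketize.%A_%a.out` pattern above and the "Success!" line canu prints, and it builds a throwaway example directory so the commands run as-is (in practice, run just the two greps inside lculinaris.ovlStore.BUILDING):

```shell
#!/bin/sh
# Sketch: scan bucketizing logs for trouble.
mkdir -p /tmp/bucket-logs/logs && cd /tmp/bucket-logs
printf 'Bucketizing...\nSuccess!\n' > logs/1-bucketize.100_1.out
printf 'slurmstepd: error: something bad\n' > logs/1-bucketize.100_2.out

# Files that mention an error or failure:
grep -l -i -e error -e failed logs/1-bucketize.*.out
# Files that never reported Success!:
grep -L 'Success!' logs/1-bucketize.*.out
```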
I tried deleting the whole BUILDING folder and re-running canu with the same parameters I used before. Canu has now been running for three days, but I get this type of error:
Found perl:
/usr/bin/perl
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
Found java:
/mnt/stori/home/fk8jybr/.linuxbrew/bin/java
openjdk version "1.8.0_242"
Found canu:
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/canu
Canu 1.9
Running job 101 based on SLURM_ARRAY_TASK_ID=101 and offset=0.
Attempting to increase maximum allowed processes and open files.
Max processes per user limited to 514718, no increase possible.
Max open files limited to 131072, no increase possible.
Opened '../lculinaris.seqStore' with 16602048 reads.
Constructing slice 101 for store './lculinaris.ovlStore.BUILDING'.
- Filtering overlaps over 1.0000 fraction error.
Bucketizing input 1 out of 1 - '1-overlapper/results/000101.ovb'
Success!
slurmstepd: error: Unable to send job complete message: Unable to contact slurm controller (connect failure)
All 654 processes (slices) are running, but I found only several directories named bucketXXXX (XXXX = numbers). I think several bucketizing jobs finished but could not send the completion message to SLURM. How can I resolve this? How can I find out which processes finished and which did not?
I wouldn't worry about it; the end of the log seems to indicate the job finished, and slurm just had an issue shutting it down. Canu has tolerance for intermittent grid errors, so it should find any jobs which did not finish and re-run them for you. I'd check your scheduler for remaining canu bucketizing jobs (if any) and for the main canu job (to make sure the grid error didn't delete jobs in the queue).
Idle; seems to have been an intermittent grid failure.
@brianwalenz and @skoren I tried running canu's bucketizing jobs. This step has 654 processes (slices). These 654 slices cause high I/O on the node's HDD, which leads to failures. The system managers asked me to reduce the number of processes running at the same time. How can I limit how many slices run concurrently?
I tried this script which @brianwalenz suggested to me:
#!/bin/sh
sbatch \
--cpus-per-task=1 --mem-per-cpu=5g --time=7-00:00:00 --partition=prod --account denolen -o logs/1-bucketize.%A_%a.out \
-D `pwd` -J "ovB_lculinaris" \
-a 1-654%20 \
`pwd`/scripts/1-bucketize.sh 0 \
> ./scripts/1-bucketize.jobSubmit-01.out 2>&1
But I get this error message in the 1-bucketize.jobSubmit-01.out file:
sbatch: error: Batch job submission failed: Invalid job array specification
This is the 1-bucketize.sh file:
#!/bin/sh
# Path to Canu.
syst=`uname -s`
arch=`uname -m | sed s/x86_64/amd64/`
bin="/mnt/stori/home/fk8jybr/canu-1.9/$syst-$arch/bin"
if [ ! -d "$bin" ] ; then
bin="/mnt/stori/home/fk8jybr/canu-1.9"
fi
# Report paths.
echo ""
echo "Found perl:"
echo " " `which perl`
echo " " `perl --version | grep version`
echo ""
echo "Found java:"
echo " " `which java`
echo " " `java -showversion 2>&1 | head -n 1`
echo ""
echo "Found canu:"
echo " " $bin/canu
echo " " `$bin/canu -version`
echo ""
# Environment for any object storage.
export CANU_OBJECT_STORE_CLIENT=
export CANU_OBJECT_STORE_CLIENT_UA=
export CANU_OBJECT_STORE_CLIENT_DA=
export CANU_OBJECT_STORE_NAMESPACE=
export CANU_OBJECT_STORE_PROJECT=
# Discover the job ID to run, from either a grid environment variable and a
# command line offset, or directly from the command line.
#
if [ x$SLURM_ARRAY_TASK_ID = x -o x$SLURM_ARRAY_TASK_ID = xundefined -o x$SLURM_ARRAY_TASK_ID = x0 ]; then
baseid=$1
offset=0
else
baseid=$SLURM_ARRAY_TASK_ID
offset=$1
fi
if [ x$offset = x ]; then
offset=0
fi
if [ x$baseid = x ]; then
echo Error: I need SLURM_ARRAY_TASK_ID set, or a job index on the command line.
exit
fi
jobid=`expr -- $baseid + $offset`
if [ x$SLURM_ARRAY_TASK_ID = x ]; then
echo Running job $jobid based on command line options.
else
echo Running job $jobid based on SLURM_ARRAY_TASK_ID=$SLURM_ARRAY_TASK_ID and offset=$offset.
fi
echo ""
echo "Attempting to increase maximum allowed processes and open files."
max=`ulimit -Hu`
bef=`ulimit -Su`
if [ $bef -lt $max ] ; then
ulimit -Su $max
aft=`ulimit -Su`
echo " Changed max processes per user from $bef to $aft (max $max)."
else
echo " Max processes per user limited to $bef, no increase possible."
fi
max=`ulimit -Hn`
bef=`ulimit -Sn`
if [ $bef -lt $max ] ; then
ulimit -Sn $max
aft=`ulimit -Sn`
echo " Changed max open files from $bef to $aft (max $max)."
else
echo " Max open files limited to $bef, no increase possible."
fi
echo ""
# This script should be executed from correction/lculinaris.ovlStore.BUILDING/, but the binary needs
# to run from correction/ (all the paths in the config are relative to there).
cd ..
jobname=`printf %04d $jobid`
if [ -e ./lculinaris.ovlStore.BUILDING/bucket$jobname ] ; then
echo "Bucketizing job finished; directory './lculinaris.ovlStore.BUILDING/bucket$jobname' exists."
exit
fi
#
# Bucketize!
#
$bin/ovStoreBucketizer \
-O ./lculinaris.ovlStore.BUILDING \
-S ../lculinaris.seqStore \
-C ./lculinaris.ovlStore.config \
-f \
-b $jobid
The script looks correct, as does the array specification (at least on our version of slurm). You should check with your admin group to help debug it. Is there a limit on the number of jobs in an array below 654? Is there another way to limit the number of concurrent jobs?
@skoren
Thank you for your answer. Yes, SLURM here limits the number of jobs in an array to below 655 (the maximum is exactly 512). I checked, and I need 655 slices. So I have to make two different scripts, 1-bucketize.jobSubmit-01.sh and 1-bucketize.jobSubmit-02.sh.
For 1-bucketize.jobSubmit-01.sh:
#!/bin/sh
sbatch \
--cpus-per-task=1 --mem-per-cpu=5g --time=7-00:00:00 --partition=prod --account denolen -o logs/1-bucketize.%A_%a.out \
-D `pwd` -J "ovB_lculinaris" \
-a 1-511%20 \
`pwd`/scripts/1-bucketize.sh 0 \
> ./scripts/1-bucketize.jobSubmit-01.out 2>&1
For 1-bucketize.jobSubmit-02.sh:
#!/bin/sh
sbatch \
--cpus-per-task=1 --mem-per-cpu=5g --time=7-00:00:00 --partition=prod --account denolen -o logs/1-bucketize.%A_%a.out \
-D `pwd` -J "ovB_lculinaris" \
-a 1-142%20 \
`pwd`/scripts/1-bucketize.sh 0 \
> ./scripts/1-bucketize.jobSubmit-01.out 2>&1
Should I somehow change the 0 at the end of `pwd`/scripts/1-bucketize.sh 0 ?
Yes, canu normally does this splitting for you. You should pass the offset to the second job, so something like:
#!/bin/sh
sbatch \
--cpus-per-task=1 --mem-per-cpu=5g --time=7-00:00:00 --partition=prod --account denolen -o logs/1-bucketize.%A_%a.out \
-D `pwd` -J "ovB_lculinaris" \
-a 1-511%20 \
`pwd`/scripts/1-bucketize.sh 0 \
> ./scripts/1-bucketize.jobSubmit-01.out 2>&1
and
#!/bin/sh
sbatch \
--cpus-per-task=1 --mem-per-cpu=5g --time=7-00:00:00 --partition=prod --account denolen -o logs/1-bucketize.%A_%a.out \
-D `pwd` -J "ovB_lculinaris" \
-a 1-143%20 \
`pwd`/scripts/1-bucketize.sh 511 \
> ./scripts/1-bucketize.jobSubmit-02.out 2>&1
So the second set of jobs will run jobs 511+1 = 512 through 511+143 = 654. If you have issues with I/O in this step, I expect you'll have issues in the next sorting step as well. You can make canu use this 20-job limit for all its arrays by using the parameter gridEngineArrayOption="-a ARRAY_JOBS%20" when you re-start.
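The arithmetic that 1-bucketize.sh performs with that trailing argument (its `jobid = baseid + offset` logic, quoted earlier in this thread) can be sketched in pure shell, using the numbers from the two submissions:

```shell
#!/bin/sh
# Sketch of how 1-bucketize.sh maps a SLURM array task id back to a
# bucket number when the 654-task array is split to fit a 512-task limit:
# bucket = SLURM_ARRAY_TASK_ID + offset (the script's command-line argument).
compute_jobid() {
    expr "$1" + "$2"   # $1 = array task id, $2 = offset
}
compute_jobid 1   0     # first submission,  -a 1-511, offset 0   -> 1
compute_jobid 511 0     # last task of the first submission       -> 511
compute_jobid 1   511   # second submission, -a 1-143, offset 511 -> 512
compute_jobid 143 511   # last task of the second submission      -> 654
```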
So, if I change my canu parameters to this, canu will use only 20 jobs for all arrays:
canu -correct -p lculinaris -d $HOME/output/canu_corrected/lculinaris genomeSize=4.0g batMemory=124g batThreads=48 gridOptions="--time=3-00:00:00 --partition=prod --account denolen" gridEngineArrayOption="-a ARRAY_JOBS%20" -pacbio-raw $HOME/input/pacbio_raw/LC001pacbio.fastq.gz
Yep, it will always pass the %20.
I restarted canu with gridEngineArrayOption="-a ARRAY_JOBS%20". I get this error in the generated core.XXX file:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1409286144 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2754), pid=642, tid=0x00007f6dc8434700
#
# JRE version: (8.0_242-b08) (build )
# Java VM: OpenJDK 64-Bit Server VM (25.242-b08 mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: /mnt/stori/home/fk8jybr/output/canu_corrected/lculinaris/correction/lculinaris.ovlStore.BUILDING/core or core.642
#
--------------- T H R E A D ---------------
Current thread (0x00007f6dc000a000): JavaThread "Unknown thread" [_thread_in_vm, id=656, stack(0x00007f6dc8335000,0x00007f6dc8435000)]
Stack: [0x00007f6dc8335000,0x00007f6dc8435000], sp=0x00007f6dc8433420, free space=1017k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x9fb5cd] VMError::report_and_die()+0x15d
V [libjvm.so+0x4c217a] report_vm_out_of_memory(char const*, int, unsigned long, VMErrorType, char const*)+0x8a
V [libjvm.so+0x861ca8] os::pd_commit_memory(char*, unsigned long, unsigned long, bool)+0xd8
V [libjvm.so+0x85957f] os::commit_memory(char*, unsigned long, unsigned long, bool)+0x1f
V [libjvm.so+0x9f8314] VirtualSpace::expand_by(unsigned long, bool)+0x1b4
V [libjvm.so+0x5abd0d] CardGeneration::CardGeneration(ReservedSpace, unsigned long, int, GenRemSet*)+0xed
V [libjvm.so+0x99f355] TenuredGeneration::TenuredGeneration(ReservedSpace, unsigned long, int, GenRemSet*)+0x65
V [libjvm.so+0x5ac951] GenerationSpec::init(ReservedSpace, int, GenRemSet*)+0x101
V [libjvm.so+0x59e5f8] GenCollectedHeap::initialize()+0x1d8
V [libjvm.so+0x9c6c19] Universe::initialize_heap()+0x189
V [libjvm.so+0x9c6e13] universe_init()+0x33
V [libjvm.so+0x5ea1e5] init_globals()+0x55
V [libjvm.so+0x9aa454] Threads::create_vm(JavaVMInitArgs*, bool*)+0x284
V [libjvm.so+0x658191] JNI_CreateJavaVM+0x51
C [libjli.so+0x7998] JavaMain+0x88
C [libpthread.so.0+0x7e65] start_thread+0xc5
--------------- P R O C E S S ---------------
Java Threads: ( => current thread )
Other Threads:
=>0x00007f6dc000a000 (exited) JavaThread "Unknown thread" [_thread_in_vm, id=656, stack(0x00007f6dc8335000,0x00007f6dc8435000)]
VM state:not at safepoint (not fully initialized)
VM Mutex/Monitor currently owned by a thread: None
heap address: 0x0000000080200000, size: 30718 MB, Compressed Oops mode: Non-zero based:0x00000000801ff000, Oop shift amount: 3
Narrow klass base: 0x0000000000000000, Narrow klass shift: 0
GC Heap History (0 events):
No events
Deoptimization events (0 events):
No events
Classes redefined (0 events):
No events
Internal exceptions (0 events):
No events
Events (0 events):
No events
That's not a Canu error and certainly has nothing to do with %20; it's an issue with the grid system. The JVM on your node failed, which means the node was overloaded and not reserving memory properly. The only reason a JVM runs during store building is to check the Java version, and it was only trying to use 1 GB of memory, whereas the bucketizing jobs had asked for 5 GB. This means the cluster isn't properly reserving memory for jobs, or at least wasn't for this job. I'd suggest asking your IT why that is the case; there's nothing we can do in canu to fix it.
I will try using the node's default openjdk, because until now I used the openjdk from homebrew. Which files should I delete from the working directory? I would like to restart canu from the latest finished job.
You don't need to delete any files, it should restart correctly itself.
I restarted canu and I get this error:
ERROR: mhap overlapper requires java version at least 1.8.0; you have unknown (from 'java').
ERROR: 'java -Xmx1g -showversion' reports:
But if I run java -Xmx1g -showversion:
java -Xmx1g -showversion
openjdk version "11.0.6" 2020-01-14 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.6+10-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.6+10-LTS, mixed mode)
(followed by the standard java usage text)
It seems to me that java is present on the node, but maybe canu cannot detect it. How could I solve this?
How did you get this far if Java wasn't working properly? You're already past the steps that use java.
Canu just runs the command java, so if it's not working, it's because that java isn't available in the PATH on the compute nodes. You should provide the full path to whatever java you want to use via the java="/full/path/to/java/bin" parameter.
However, I doubt the JVM is the source of the issue. The original error was an out-of-memory failure on the node while the process was using only 1 GB of its 5 GB reservation. That implies the node was not properly holding the memory you asked for. I'd go back and try to find out how much memory that job was using, how much it had reserved/requested, and why it was unable to run. You might need to get help from your IT for that.
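If the cluster has SLURM accounting enabled, one way to pull those numbers might be the sketch below. The job id is a placeholder; `sacct` and its `--format` fields are standard SLURM, but availability depends on the site:

```shell
#!/bin/sh
# Sketch: ask SLURM accounting what a finished job requested vs. actually used.
# 12345 is a hypothetical job id; substitute the failed bucketizing job's id.
JOBID=12345
if command -v sacct >/dev/null 2>&1; then
    sacct -j "$JOBID" --format=JobID,JobName,ReqMem,MaxRSS,Elapsed,State
else
    echo "sacct not found; ask your admins for job accounting data"
fi
```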
I used java from homebrew, and this worked very well until that memory error. At first I thought the error was caused by not using the default java on the node, but it seems the error is different. I will ask IT for help to solve this issue.
The default java on the node is not set, so I'm not sure you can avoid using homebrew (or you wouldn't have gotten that error about versions). You can use a non-homebrew java (assuming which java indeed reports a non-homebrew version) if you give a full path to its binary, as long as it is also available on the compute nodes.
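A sketch of how one might sanity-check a candidate binary before passing it via the java= parameter (the path is the homebrew one quoted earlier in this thread, used purely as an example):

```shell
#!/bin/sh
# Sketch: verify a java binary exists and runs before giving it to canu
# as java="$JAVA_BIN". The path below is an example from this thread.
JAVA_BIN=/mnt/stori/home/fk8jybr/.linuxbrew/bin/java
if [ -x "$JAVA_BIN" ]; then
    "$JAVA_BIN" -Xmx1g -showversion 2>&1 | head -n 1
else
    echo "not executable: $JAVA_BIN"
fi
```

If the check passes on a compute node (not just the login node), add java="$JAVA_BIN" to the canu command.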
canu would like to run this script:
#!/bin/sh
sbatch \
--cpus-per-task=1 --mem-per-cpu=39680m --time=7-00:00:00 --partition=prod --account denolen -o logs/2-sort.%A_%a.out \
-D `pwd` -J "ovS_lculinaris" \
-a 1-20%20 \
`pwd`/scripts/2-sort.sh 0 \
> ./scripts/2-sort.jobSubmit-01.out 2>&1
I think the error will be that the node I use has 48 CPUs and 148 GB of RAM. canu wants 39.6 GB of RAM and 1 CPU per process, and SLURM somehow lets 20 processes run at the same time on the same node, but 20 x 39.6 GB is not available out of the maximum 148 GB of RAM per node. I wrote an email to IT; I hope they will resolve this SLURM error.
That's consistent with my initial guess when you first hit the error with %20: your cluster isn't checking the memory request. This is also the source of the JVM error, and may actually be why you hit the I/O issues too; it over-scheduled the jobs before as well, causing higher I/O load.
You can work around this by setting gridOptionsOVS="--cpus-per-task=16 --mem-per-cpu=2500m", which should reserve 16 cores and limit you to three jobs per node. In general, any job using more than 3 GB/core (so bucketizing, sorting) would run into this issue on your cluster.
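The arithmetic behind that suggestion, using the sort request and node size reported in this thread:

```shell
#!/bin/sh
# Why --cpus-per-task=16 --mem-per-cpu=2500m fixes the over-scheduling:
# each sort job then reserves 16 x 2500 MB, covering its 39680 MB request,
# and a 48-core node can only fit 48/16 = 3 such jobs at once.
cpus_per_task=16
mem_per_cpu_mb=2500
node_cores=48
sort_request_mb=39680

job_mem_mb=$((cpus_per_task * mem_per_cpu_mb))
jobs_per_node=$((node_cores / cpus_per_task))
echo "reserved per job: ${job_mem_mb} MB (request: ${sort_request_mb} MB)"
echo "concurrent sort jobs per node: ${jobs_per_node}"
```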
I put this parameter in the canu command and restarted. Now I see that canu made core.XXXX files and new directories, 2-correction and lculinaris.corStore.WORKING.
Found perl:
/usr/bin/perl
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
Found java:
/mnt/stori/home/fk8jybr/.linuxbrew/bin/java
openjdk version "1.8.0_242"
Found canu:
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/canu
Canu 1.9
-- Canu 1.9
--
-- CITATIONS
--
-- Koren S, Walenz BP, Berlin K, Miller JR, Phillippy AM.
-- Canu: scalable and accurate long-read assembly via adaptive k-mer weighting and repeat separation.
-- Genome Res. 2017 May;27(5):722-736.
-- http://doi.org/10.1101/gr.215087.116
--
-- Koren S, Rhie A, Walenz BP, Dilthey AT, Bickhart DM, Kingan SB, Hiendleder S, Williams JL, Smith TPL, Phillippy AM.
-- De novo assembly of haplotype-resolved genomes with trio binning.
-- Nat Biotechnol. 2018
-- https://doi.org/10.1038/nbt.4277
--
-- Read and contig alignments during correction, consensus and GFA building use:
-- Šošic M, Šikic M.
-- Edlib: a C/C ++ library for fast, exact sequence alignment using edit distance.
-- Bioinformatics. 2017 May 1;33(9):1394-1395.
-- http://doi.org/10.1093/bioinformatics/btw753
--
-- Overlaps are generated using:
-- Berlin K, et al.
-- Assembling large genomes with single-molecule sequencing and locality-sensitive hashing.
-- Nat Biotechnol. 2015 Jun;33(6):623-30.
-- http://doi.org/10.1038/nbt.3238
--
-- Myers EW, et al.
-- A Whole-Genome Assembly of Drosophila.
-- Science. 2000 Mar 24;287(5461):2196-204.
-- http://doi.org/10.1126/science.287.5461.2196
--
-- Corrected read consensus sequences are generated using an algorithm derived from FALCON-sense:
-- Chin CS, et al.
-- Phased diploid genome assembly with single-molecule real-time sequencing.
-- Nat Methods. 2016 Dec;13(12):1050-1054.
-- http://doi.org/10.1038/nmeth.4035
--
-- Contig consensus sequences are generated using an algorithm derived from pbdagcon:
-- Chin CS, et al.
-- Nonhybrid, finished microbial genome assemblies from long-read SMRT sequencing data.
-- Nat Methods. 2013 Jun;10(6):563-9
-- http://doi.org/10.1038/nmeth.2474
--
-- CONFIGURE CANU
--
-- Detected Java(TM) Runtime Environment '1.8.0_242' (from 'java') with -d64 support.
--
-- WARNING:
-- WARNING: Failed to run gnuplot using command 'gnuplot'.
-- WARNING: Plots will be disabled.
-- WARNING:
--
-- Detected 48 CPUs and 126 gigabytes of memory.
-- Detected Slurm with 'sinfo' binary in /usr/bin/sinfo.
-- Detected Slurm with task IDs up to 511 allowed.
--
-- Found 2 hosts with 24 cores and 22 GB memory under Slurm control.
-- Found 48 hosts with 48 cores and 124 GB memory under Slurm control.
--
-- (tag)Threads
-- (tag)Memory |
-- (tag) | | algorithm
-- ------- ------ -------- -----------------------------
-- Grid: meryl 24 GB 8 CPUs (k-mer counting)
-- Grid: hap 16 GB 24 CPUs (read-to-haplotype assignment)
-- Grid: cormhap 22 GB 16 CPUs (overlap detection with mhap)
-- Grid: obtovl 24 GB 16 CPUs (overlap detection)
-- Grid: utgovl 24 GB 16 CPUs (overlap detection)
-- Grid: cor 24 GB 4 CPUs (read correction)
-- Grid: ovb 4 GB 1 CPU (overlap store bucketizer)
-- Grid: ovs 32 GB 1 CPU (overlap store sorting)
-- Grid: red 41 GB 8 CPUs (read error detection)
-- Grid: oea 8 GB 1 CPU (overlap error adjustment)
-- Grid: bat 124 GB 48 CPUs (contig construction with bogart)
-- Grid: cns --- GB 8 CPUs (consensus)
-- Grid: gfa 64 GB 32 CPUs (GFA alignment and processing)
--
-- In 'lculinaris.seqStore', found PacBio reads:
-- Raw: 16602048
-- Corrected: 0
-- Trimmed: 0
--
-- Generating assembly 'lculinaris' in '/mnt/stori/home/fk8jybr/output/canu_corrected/lculinaris'
--
-- Parameters:
--
-- genomeSize 4000000000
--
-- Overlap Generation Limits:
-- corOvlErrorRate 0.2400 ( 24.00%)
-- obtOvlErrorRate 0.0450 ( 4.50%)
-- utgOvlErrorRate 0.0450 ( 4.50%)
--
-- Overlap Processing Limits:
-- corErrorRate 0.3000 ( 30.00%)
-- obtErrorRate 0.0450 ( 4.50%)
-- utgErrorRate 0.0450 ( 4.50%)
-- cnsErrorRate 0.0750 ( 7.50%)
--
--
-- BEGIN CORRECTION
--
-- No change in report.
--
-- Creating overlap store correction/lculinaris.ovlStore using:
-- 655 buckets
-- 655 slices
-- using at most 29 GB memory each
-- Overlap store sorter finished.
-- No change in report.
-- Finished stage 'cor-overlapStoreSorterCheck', reset canuIteration.
----------------------------------------
-- Starting command on Mon May 25 12:36:14 2020 with 178218.65 GB free disk space
cd correction
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/ovStoreIndexer \
-O ./lculinaris.ovlStore.BUILDING \
-S ../lculinaris.seqStore \
-C ./lculinaris.ovlStore.config \
-delete \
> ./lculinaris.ovlStore.BUILDING.index.err 2>&1
-- Finished on Mon May 25 13:10:56 2020 (2082 seconds) with 178631.055 GB free disk space
----------------------------------------
-- Checking store.
----------------------------------------
-- Starting command on Mon May 25 13:10:56 2020 with 178631.055 GB free disk space
cd correction
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/ovStoreDump \
-S ../lculinaris.seqStore \
-O ./lculinaris.ovlStore \
-counts \
> ./lculinaris.ovlStore/counts.dat 2> ./lculinaris.ovlStore/counts.err
-- Finished on Mon May 25 13:11:05 2020 (9 seconds) with 178630.841 GB free disk space
----------------------------------------
--
-- Overlap store 'correction/lculinaris.ovlStore' successfully constructed.
-- Found 811016098412 overlaps for 16585800 reads; 16248 reads have no overlaps.
--
--
-- Purged 7900.121 GB in 1674 overlap output files.
-- No change in report.
-- Finished stage 'cor-createOverlapStore', reset canuIteration.
-- Set corMinCoverage=4 based on read coverage of 32.
-- Global filter scores will be estimated.
-- Computing correction layouts.
----------------------------------------
-- Starting command on Mon May 25 13:17:00 2020 with 186531.187 GB free disk space
cd correction
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/generateCorrectionLayouts \
-S ../lculinaris.seqStore \
-O ./lculinaris.ovlStore \
-C ./lculinaris.corStore.WORKING \
-eC 80 \
> ./lculinaris.corStore.err 2>&1
This was the last output report. I think canu is at the "read correction" stage, where it should use 4 CPU cores and 48 GB of RAM. But now canu uses only 1 CPU and runs only one process on one node. Is this normal?
This is the last script canu ran on SLURM.
#!/bin/sh
# Path to Canu.
syst=`uname -s`
arch=`uname -m | sed s/x86_64/amd64/`
bin="/mnt/stori/home/fk8jybr/canu-1.9/$syst-$arch/bin"
if [ ! -d "$bin" ] ; then
bin="/mnt/stori/home/fk8jybr/canu-1.9"
fi
# Report paths.
echo ""
echo "Found perl:"
echo " " `which perl`
echo " " `perl --version | grep version`
echo ""
echo "Found java:"
echo " " `which java`
echo " " `java -showversion 2>&1 | head -n 1`
echo ""
echo "Found canu:"
echo " " $bin/canu
echo " " `$bin/canu -version`
echo ""
# Environment for any object storage.
export CANU_OBJECT_STORE_CLIENT=
export CANU_OBJECT_STORE_CLIENT_UA=
export CANU_OBJECT_STORE_CLIENT_DA=
export CANU_OBJECT_STORE_NAMESPACE=
export CANU_OBJECT_STORE_PROJECT=
rm -f canu.out
ln -s canu-scripts/canu.26.out canu.out
/usr/bin/env perl \
$bin/canu -correct -p 'lculinaris' 'genomeSize=4.0g' 'batMemory=124g' 'batThreads=48' 'gridOptions=--time=7-00:00:00 --partition=prod --account denolen' 'gridEngineArrayOption=-a ARRAY_JOBS%20' 'gridOptionsOVS=--cpus-per-task=16 --mem-per-cpu=2500m' -pacbio-raw '/mnt/stori/home/fk8jybr/input/pacbio_raw/LC001pacbio.fastq.gz' canuIteration=1
Yep, that's correct; it's not always going to use all cores, as some steps are single-threaded.
canu finished the correction process.
This is the final report file:
Found perl:
/usr/bin/perl
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
Found java:
/mnt/stori/home/fk8jybr/.linuxbrew/bin/java
openjdk version "1.8.0_242"
Found canu:
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/canu
Canu 1.9
-- Canu 1.9
--
-- CITATIONS (identical to the first report above; omitted)
--
-- CONFIGURE CANU
--
-- Detected Java(TM) Runtime Environment '1.8.0_242' (from 'java') with -d64 support.
--
-- WARNING:
-- WARNING: Failed to run gnuplot using command 'gnuplot'.
-- WARNING: Plots will be disabled.
-- WARNING:
--
-- Detected 48 CPUs and 126 gigabytes of memory.
-- Detected Slurm with 'sinfo' binary in /usr/bin/sinfo.
-- Detected Slurm with task IDs up to 511 allowed.
--
-- Found 2 hosts with 24 cores and 22 GB memory under Slurm control.
-- Found 48 hosts with 48 cores and 124 GB memory under Slurm control.
--
-- (tag)Threads
-- (tag)Memory |
-- (tag) | | algorithm
-- ------- ------ -------- -----------------------------
-- Grid: meryl 24 GB 8 CPUs (k-mer counting)
-- Grid: hap 16 GB 24 CPUs (read-to-haplotype assignment)
-- Grid: cormhap 22 GB 16 CPUs (overlap detection with mhap)
-- Grid: obtovl 24 GB 16 CPUs (overlap detection)
-- Grid: utgovl 24 GB 16 CPUs (overlap detection)
-- Grid: cor 24 GB 4 CPUs (read correction)
-- Grid: ovb 4 GB 1 CPU (overlap store bucketizer)
-- Grid: ovs 32 GB 1 CPU (overlap store sorting)
-- Grid: red 41 GB 8 CPUs (read error detection)
-- Grid: oea 8 GB 1 CPU (overlap error adjustment)
-- Grid: bat 124 GB 48 CPUs (contig construction with bogart)
-- Grid: cns --- GB 8 CPUs (consensus)
-- Grid: gfa 64 GB 32 CPUs (GFA alignment and processing)
--
-- In 'lculinaris.seqStore', found PacBio reads:
-- Raw: 16602048
-- Corrected: 0
-- Trimmed: 0
--
-- Generating assembly 'lculinaris' in '/mnt/stori/home/fk8jybr/output/canu_corrected/lculinaris'
--
-- Parameters:
--
-- genomeSize 4000000000
--
-- Overlap Generation Limits:
-- corOvlErrorRate 0.2400 ( 24.00%)
-- obtOvlErrorRate 0.0450 ( 4.50%)
-- utgOvlErrorRate 0.0450 ( 4.50%)
--
-- Overlap Processing Limits:
-- corErrorRate 0.3000 ( 30.00%)
-- obtErrorRate 0.0450 ( 4.50%)
-- utgErrorRate 0.0450 ( 4.50%)
-- cnsErrorRate 0.0750 ( 7.50%)
--
--
-- BEGIN CORRECTION
--
-- No change in report.
-- Set corMinCoverage=4 based on read coverage of 32.
-- Found 256 read correction output files.
-- No change in report.
-- Finished stage 'cor-generateCorrectedReadsCheck', reset canuIteration.
-- Found 256 read correction output files.
-- No change in report.
-- Finished stage 'cor-generateCorrectedReadsCheck', reset canuIteration.
-- Found 256 read correction output files.
-- No change in report.
-- Finished stage 'cor-generateCorrectedReadsCheck', reset canuIteration.
--
-- Loading corrected reads into corStore and seqStore.
----------------------------------------
-- Starting command on Sat May 30 19:52:24 2020 with 186184.395 GB free disk space
cd correction
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/loadCorrectedReads \
-S ../lculinaris.seqStore \
-C ./lculinaris.corStore \
-L ./2-correction/corjob.files \
> ./lculinaris.loadCorrectedReads.log \
2> ./lculinaris.loadCorrectedReads.err
-- Finished on Sat May 30 20:24:49 2020 (1945 seconds) with 186131.596 GB free disk space
----------------------------------------
--
-- WARNING: gnuplot failed.
--
----------------------------------------
--
-- In sequence store './lculinaris.seqStore':
-- Found 13537706 reads.
-- Found 102118358098 bases (25.52 times coverage).
--
-- Read length histogram (one '*' equals 122729.3 reads):
-- 0 4999 2684392 *********************
-- 5000 9999 8591051 **********************************************************************
-- 10000 14999 1824672 **************
-- 15000 19999 332581 **
-- 20000 24999 74856
-- 25000 29999 17240
-- 30000 34999 4962
-- 35000 39999 1935
-- 40000 44999 1088
-- 45000 49999 847
-- 50000 54999 706
-- 55000 59999 522
-- 60000 64999 397
-- 65000 69999 406
-- 70000 74999 360
-- 75000 79999 328
-- 80000 84999 267
-- 85000 89999 245
-- 90000 94999 186
-- 95000 99999 148
-- 100000 104999 142
-- 105000 109999 100
-- 110000 114999 71
-- 115000 119999 62
-- 120000 124999 44
-- 125000 129999 29
-- 130000 134999 18
-- 135000 139999 10
-- 140000 144999 8
-- 145000 149999 8
-- 150000 154999 5
-- 155000 159999 3
-- 160000 164999 5
-- 165000 169999 3
-- 170000 174999 2
-- 175000 179999 3
-- 180000 184999 2
-- 185000 189999 0
-- 190000 194999 0
-- 195000 199999 0
-- 200000 204999 2
--
-- Purging correctReads output after loading into stores.
-- Purged 256 .cns outputs.
-- Purged 256 .out job log outputs.
--
-- Purging overlaps used for correction.
-- Report changed.
-- Finished stage 'cor-loadCorrectedReads', reset canuIteration.
----------------------------------------
-- Starting command on Sat May 30 20:36:36 2020 with 201466.761 GB free disk space
cd .
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/sqStoreDumpFASTQ \
-corrected \
-S ./lculinaris.seqStore \
-o ./lculinaris.correctedReads.gz \
-fasta \
-nolibname \
> lculinaris.correctedReads.fasta.err 2>&1
-- Finished on Sat May 30 22:02:36 2020 (5160 seconds) with 201435.818 GB free disk space
----------------------------------------
--
-- Corrected reads saved in 'lculinaris.correctedReads.fasta.gz'.
-- No change in report.
-- Finished stage 'cor-dumpCorrectedReads', reset canuIteration.
--
-- Bye.
Now I would like to start the trimming step in Canu. These are the parameters I plan to use:
canu -trim -p lculinaris -d $HOME/output/canu_trim/lculinaris genomeSize=4.0g batMemory=124g batThreads=48 gridOptions="--time=7-00:00:00 --partition=prod --account XXXX" gridEngineArrayOption="-a ARRAY_JOBS%20" gridOptionsOVS="--cpus-per-task=16 --mem-per-cpu=2500m" -pacbio-corrected $HOME/output/canu_corrected/lculinaris/lculinaris.correctedReads.fasta.gz
Are these parameters good for Canu? Should I keep gridEngineArrayOption="-a ARRAY_JOBS%20" and gridOptionsOVS="--cpus-per-task=16 --mem-per-cpu=2500m"?
@skoren
I ran canu -trim with these parameters:
canu -trim -p lculinaris -d $HOME/output/canu_trim/lculinaris genomeSize=4.0g batMemory=124g batThreads=48 gridOptions="--time=7-00:00:00 --partition=prod --account XXXX" gridEngineArrayOption="-a ARRAY_JOBS%20" gridOptionsOVS="--cpus-per-task=16 --mem-per-cpu=2500m" -pacbio-corrected $HOME/output/canu_corrected/lculinaris/lculinaris.correctedReads.fasta.gz
I got this error message from Canu:
Found perl:
/usr/bin/perl
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
Found java:
/mnt/stori/home/fk8jybr/.linuxbrew/bin/java
openjdk version "1.8.0_242"
Found canu:
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/canu
Canu 1.9
-- Canu 1.9
--
-- CONFIGURE CANU
--
-- Detected Java(TM) Runtime Environment '1.8.0_242' (from 'java') with -d64 support.
--
-- WARNING:
-- WARNING: Failed to run gnuplot using command 'gnuplot'.
-- WARNING: Plots will be disabled.
-- WARNING:
--
-- Detected 48 CPUs and 126 gigabytes of memory.
-- Detected Slurm with 'sinfo' binary in /usr/bin/sinfo.
-- Detected Slurm with task IDs up to 511 allowed.
--
-- Found 2 hosts with 24 cores and 22 GB memory under Slurm control.
-- Found 48 hosts with 48 cores and 124 GB memory under Slurm control.
--
-- (tag)Threads
-- (tag)Memory |
-- (tag) | | algorithm
-- ------- ------ -------- -----------------------------
-- Grid: meryl 24 GB 8 CPUs (k-mer counting)
-- Grid: hap 16 GB 24 CPUs (read-to-haplotype assignment)
-- Grid: cormhap 22 GB 16 CPUs (overlap detection with mhap)
-- Grid: obtovl 24 GB 16 CPUs (overlap detection)
-- Grid: utgovl 24 GB 16 CPUs (overlap detection)
-- Grid: cor 24 GB 4 CPUs (read correction)
-- Grid: ovb 4 GB 1 CPU (overlap store bucketizer)
-- Grid: ovs 32 GB 1 CPU (overlap store sorting)
-- Grid: red 41 GB 8 CPUs (read error detection)
-- Grid: oea 8 GB 1 CPU (overlap error adjustment)
-- Grid: bat 124 GB 48 CPUs (contig construction with bogart)
-- Grid: cns --- GB 8 CPUs (consensus)
-- Grid: gfa 64 GB 32 CPUs (GFA alignment and processing)
--
-- In 'lculinaris.seqStore', found PacBio reads:
-- Raw: 0
-- Corrected: 13239306
-- Trimmed: 0
--
-- Generating assembly 'lculinaris' in '/mnt/stori/home/fk8jybr/output/canu_trim/lculinaris'
--
-- Parameters:
--
-- genomeSize 4000000000
--
-- Overlap Generation Limits:
-- corOvlErrorRate 0.2400 ( 24.00%)
-- obtOvlErrorRate 0.0450 ( 4.50%)
-- utgOvlErrorRate 0.0450 ( 4.50%)
--
-- Overlap Processing Limits:
-- corErrorRate 0.3000 ( 30.00%)
-- obtErrorRate 0.0450 ( 4.50%)
-- utgErrorRate 0.0450 ( 4.50%)
-- cnsErrorRate 0.0750 ( 7.50%)
--
--
-- BEGIN TRIMMING
--
-- No change in report.
--
-- Creating overlap store trimming/lculinaris.ovlStore using:
-- 140 buckets
-- 140 slices
-- using at most 29 GB memory each
-- Overlap store sorter finished.
-- No change in report.
-- Finished stage 'obt-overlapStoreSorterCheck', reset canuIteration.
----------------------------------------
-- Starting command on Fri Jun 5 12:00:34 2020 with 196332.982 GB free disk space
cd trimming
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/ovStoreIndexer \
-O ./lculinaris.ovlStore.BUILDING \
-S ../lculinaris.seqStore \
-C ./lculinaris.ovlStore.config \
-delete \
> ./lculinaris.ovlStore.BUILDING.index.err 2>&1
-- Finished on Fri Jun 5 12:00:37 2020 (3 seconds) with 196332.982 GB free disk space
----------------------------------------
ERROR:
ERROR: Failed with exit code 1. (rc=256)
ERROR:
ABORT:
ABORT: Canu 1.9
ABORT: Don't panic, but a mostly harmless error occurred and Canu stopped.
ABORT: Try restarting. If that doesn't work, ask for help.
ABORT:
ABORT: failed to build index for overlap store.
ABORT:
ABORT: Disk space available: 196332.982 GB
ABORT:
ABORT: Last 50 lines of the relevant log file (trimming/lculinaris.ovlStore.BUILDING.index.err):
ABORT:
ABORT: - 96 9022347 9117031 13239306 1238038855
ABORT: - 97 9117032 9213921 13239306 1238028613
ABORT: - 98 9213922 9312559 13239306 1238022755
ABORT: - 99 9312560 9410058 13239306 1238040868
ABORT: - 100 9410059 9505434 13239306 1238003136
ABORT: - 101 9505435 9602855 13239306 1238033880
ABORT: - 102 9602856 9701542 13239306 1238034193
ABORT: - 103 9701543 9798790 13239306 1238038333
ABORT: - 104 9798791 9897882 13239306 1238041623
ABORT: - 105 9897883 9995235 13239306 1238039975
ABORT: - 106 9995236 10090533 13239306 1238043784
ABORT: - 107 10090534 10185681 13239306 1238016973
ABORT: - 108 10185682 10284408 13239306 1238040236
ABORT: - 109 10284409 10383953 13239306 1238043761
ABORT: - 110 10383954 10483605 13239306 1238011557
ABORT: - 111 10483606 10580623 13239306 1238038894
ABORT: - 112 10580624 10680239 13239306 1238037228
ABORT: - 113 10680240 10775880 13239306 1238008865
ABORT: - 114 10775881 10862941 13239306 1238033983
ABORT: - 115 10862942 10951310 13239306 1238028646
ABORT: - 116 10951311 11037978 13239306 1238032078
ABORT: - 117 11037979 11125930 13239306 1238012796
ABORT: - 118 11125931 11211879 13239306 1238042048
ABORT: - 119 11211880 11301488 13239306 1237993580
ABORT: - 120 11301489 11389528 13239306 1237999527
ABORT: - 121 11389529 11479453 13239306 1238020347
ABORT: - 122 11479454 11569927 13239306 1238021084
ABORT: - 123 11569928 11658469 13239306 1238025566
ABORT: - 124 11658470 11749014 13239306 1238015219
ABORT: - 125 11749015 11838420 13239306 1238042162
ABORT: - 126 11838421 11930016 13239306 1238043721
ABORT: - 127 11930017 12023443 13239306 1238009956
ABORT: - 128 12023444 12121824 13239306 1238027058
ABORT: - 129 12121825 12219155 13239306 1238020453
ABORT: - 130 12219156 12317534 13239306 1238017301
ABORT: - 131 12317535 12415718 13239306 1238035668
ABORT: - 132 12415719 12513002 13239306 1238019383
ABORT: - 133 12513003 12609640 13239306 1238019439
ABORT: - 134 12609641 12709310 13239306 1238035664
ABORT: - 135 12709311 12809528 13239306 1238031891
ABORT: - 136 12809529 12908112 13239306 1237970502
ABORT: - 137 12908113 13006608 13239306 1238041020
ABORT: - 138 13006609 13104672 13239306 1238039725
ABORT: - 139 13104673 13204611 13239306 1238038743
ABORT: - 140 13204612 13239306 13239306 443463959
ABORT: - ----- --------- --------- -------- ----------
ABORT: -
ABORT: - Merging indexes.
ABORT: -
ABORT: AS_UTL_loadFile()-- File './lculinaris.ovlStore.BUILDING/0004.index' contains 9611946 objects, but asked to load 13239307.
ABORT:
How can I resolve this error?
This is again an I/O error; it looks like there are truncated files in your run. You can try removing the BUILDING folder and restarting Canu while limiting the concurrency even further.
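In practice that recovery amounts to deleting the partially built store and re-running the same canu -trim command with a lower array-job limit; Canu resumes from the last completed stage. A minimal sketch, using the paths from this run:

```shell
# Delete the partially built overlap store so the restarted canu run
# rebuilds it from the completed sorter outputs (path from this thread).
rm -rf trimming/lculinaris.ovlStore.BUILDING
# Then re-run the same canu -trim command, e.g. with fewer concurrent
# array tasks: gridEngineArrayOption="-a ARRAY_JOBS%5"
```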
Thank you @skoren for your answer and help. I deleted the BUILDING folder and restarted with the gridEngineArrayOption="-a ARRAY_JOBS%5" parameter. I hope this will resolve the problem.
Dear @skoren! canu -trim finished successfully. This is the report:
Found perl:
/usr/bin/perl
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
Found java:
/mnt/stori/home/fk8jybr/.linuxbrew/bin/java
openjdk version "1.8.0_242"
Found canu:
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/canu
Canu 1.9
-- Canu 1.9
--
-- CONFIGURE CANU
--
-- Detected Java(TM) Runtime Environment '1.8.0_242' (from 'java') with -d64 support.
--
-- WARNING:
-- WARNING: Failed to run gnuplot using command 'gnuplot'.
-- WARNING: Plots will be disabled.
-- WARNING:
--
-- Detected 48 CPUs and 126 gigabytes of memory.
-- Detected Slurm with 'sinfo' binary in /usr/bin/sinfo.
-- Detected Slurm with task IDs up to 511 allowed.
--
-- Found 2 hosts with 24 cores and 22 GB memory under Slurm control.
-- Found 48 hosts with 48 cores and 124 GB memory under Slurm control.
--
-- (tag)Threads
-- (tag)Memory |
-- (tag) | | algorithm
-- ------- ------ -------- -----------------------------
-- Grid: meryl 24 GB 8 CPUs (k-mer counting)
-- Grid: hap 16 GB 24 CPUs (read-to-haplotype assignment)
-- Grid: cormhap 22 GB 16 CPUs (overlap detection with mhap)
-- Grid: obtovl 24 GB 16 CPUs (overlap detection)
-- Grid: utgovl 24 GB 16 CPUs (overlap detection)
-- Grid: cor 24 GB 4 CPUs (read correction)
-- Grid: ovb 4 GB 1 CPU (overlap store bucketizer)
-- Grid: ovs 32 GB 1 CPU (overlap store sorting)
-- Grid: red 41 GB 8 CPUs (read error detection)
-- Grid: oea 8 GB 1 CPU (overlap error adjustment)
-- Grid: bat 124 GB 48 CPUs (contig construction with bogart)
-- Grid: cns --- GB 8 CPUs (consensus)
-- Grid: gfa 64 GB 32 CPUs (GFA alignment and processing)
--
-- In 'lculinaris.seqStore', found PacBio reads:
-- Raw: 0
-- Corrected: 13239306
-- Trimmed: 0
--
-- Generating assembly 'lculinaris' in '/mnt/stori/home/fk8jybr/output/canu_trim/lculinaris'
--
-- Parameters:
--
-- genomeSize 4000000000
--
-- Overlap Generation Limits:
-- corOvlErrorRate 0.2400 ( 24.00%)
-- obtOvlErrorRate 0.0450 ( 4.50%)
-- utgOvlErrorRate 0.0450 ( 4.50%)
--
-- Overlap Processing Limits:
-- corErrorRate 0.3000 ( 30.00%)
-- obtErrorRate 0.0450 ( 4.50%)
-- utgErrorRate 0.0450 ( 4.50%)
-- cnsErrorRate 0.0750 ( 7.50%)
--
--
-- BEGIN TRIMMING
--
-- No change in report.
--
-- Creating overlap store trimming/lculinaris.ovlStore using:
-- 140 buckets
-- 140 slices
-- using at most 29 GB memory each
-- Overlap store sorter finished.
-- No change in report.
-- Finished stage 'obt-overlapStoreSorterCheck', reset canuIteration.
----------------------------------------
-- Starting command on Sat Jun 6 16:26:40 2020 with 196331.682 GB free disk space
cd trimming
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/ovStoreIndexer \
-O ./lculinaris.ovlStore.BUILDING \
-S ../lculinaris.seqStore \
-C ./lculinaris.ovlStore.config \
-delete \
> ./lculinaris.ovlStore.BUILDING.index.err 2>&1
-- Finished on Sat Jun 6 16:29:20 2020 (160 seconds) with 196379.079 GB free disk space
----------------------------------------
-- Checking store.
----------------------------------------
-- Starting command on Sat Jun 6 16:29:20 2020 with 196379.079 GB free disk space
cd trimming
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/ovStoreDump \
-S ../lculinaris.seqStore \
-O ./lculinaris.ovlStore \
-counts \
> ./lculinaris.ovlStore/counts.dat 2> ./lculinaris.ovlStore/counts.err
-- Finished on Sat Jun 6 16:29:26 2020 (6 seconds) with 196378.918 GB free disk space
----------------------------------------
--
-- Overlap store 'trimming/lculinaris.ovlStore' successfully constructed.
-- Found 172528613832 overlaps for 13194326 reads; 44980 reads have no overlaps.
--
--
-- Purged 1601.381 GB in 1479 overlap output files.
-- No change in report.
-- Finished stage 'obt-createOverlapStore', reset canuIteration.
----------------------------------------
-- Starting command on Sat Jun 6 16:30:17 2020 with 197980.467 GB free disk space
cd trimming/3-overlapbasedtrimming
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/trimReads \
-S ../../lculinaris.seqStore \
-O ../lculinaris.ovlStore \
-Co ./lculinaris.1.trimReads.clear \
-e 0.045 \
-minlength 1000 \
-ol 500 \
-oc 2 \
-o ./lculinaris.1.trimReads \
> ./lculinaris.1.trimReads.err 2>&1
-- Finished on Sun Jun 7 10:24:53 2020 (64476 seconds, like watching paint dry) with 197979.398 GB free disk space
----------------------------------------
-- PARAMETERS:
-- ----------
-- 1000 (reads trimmed below this many bases are deleted)
-- 0.0450 (use overlaps at or below this fraction error)
-- 500 (break region if overlap is less than this long, for 'largest covered' algorithm)
-- 2 (break region if overlap coverage is less than this many reads, for 'largest covered' algorithm)
--
-- INPUT READS:
-- -----------
-- 13239306 reads 101922713657 bases (reads processed)
-- 0 reads 0 bases (reads not processed, previously deleted)
-- 0 reads 0 bases (reads not processed, in a library where trimming isn't allowed)
--
-- OUTPUT READS:
-- ------------
-- 8675204 reads 64449313446 bases (trimmed reads output)
-- 4391617 reads 34400198870 bases (reads with no change, kept as is)
-- 44980 reads 202296459 bases (reads with no overlaps, deleted)
-- 127505 reads 239054913 bases (reads with short trimmed length, deleted)
--
-- TRIMMING DETAILS:
-- ----------------
-- 5468905 reads 1357056015 bases (bases trimmed from the 5' end of a read)
-- 6167745 reads 1274793954 bases (bases trimmed from the 3' end of a read)
-- Report changed.
-- Finished stage 'obt-trimReads', reset canuIteration.
----------------------------------------
-- Starting command on Sun Jun 7 10:24:53 2020 with 197979.398 GB free disk space
cd trimming/3-overlapbasedtrimming
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/splitReads \
-S ../../lculinaris.seqStore \
-O ../lculinaris.ovlStore \
-Ci ./lculinaris.1.trimReads.clear \
-Co ./lculinaris.2.splitReads.clear \
-e 0.045 \
-minlength 1000 \
-o ./lculinaris.2.splitReads \
> ./lculinaris.2.splitReads.err 2>&1
-- Finished on Wed Jun 10 04:49:14 2020 (239061 seconds, at least I didn't crash) with 197978.779 GB free disk space
----------------------------------------
-- PARAMETERS:
-- ----------
-- 1000 (reads trimmed below this many bases are deleted)
-- 0.0450 (use overlaps at or below this fraction error)
-- INPUT READS:
-- -----------
-- 13066821 reads 101481362285 bases (reads processed)
-- 172485 reads 441351372 bases (reads not processed, previously deleted)
-- 0 reads 0 bases (reads not processed, in a library where trimming isn't allowed)
--
-- PROCESSED:
-- --------
-- 0 reads 0 bases (no overlaps)
-- 411 reads 1116341 bases (no coverage after adjusting for trimming done already)
-- 0 reads 0 bases (processed for chimera)
-- 0 reads 0 bases (processed for spur)
-- 13066410 reads 101480245944 bases (processed for subreads)
--
-- READS WITH SIGNALS:
-- ------------------
-- 0 reads 0 signals (number of 5' spur signal)
-- 0 reads 0 signals (number of 3' spur signal)
-- 0 reads 0 signals (number of chimera signal)
-- 111435 reads 111996 signals (number of subread signal)
--
-- SIGNALS:
-- -------
-- 0 reads 0 bases (size of 5' spur signal)
-- 0 reads 0 bases (size of 3' spur signal)
-- 0 reads 0 bases (size of chimera signal)
-- 111996 reads 43559760 bases (size of subread signal)
--
-- TRIMMING:
-- --------
-- 60001 reads 437016237 bases (trimmed from the 5' end of the read)
-- 51465 reads 373583750 bases (trimmed from the 3' end of the read)
-- Report changed.
-- Finished stage 'obt-splitReads', reset canuIteration.
----------------------------------------
-- Starting command on Wed Jun 10 04:49:14 2020 with 197978.779 GB free disk space
cd trimming/3-overlapbasedtrimming
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/loadTrimmedReads \
-S ../../lculinaris.seqStore \
-c ./lculinaris.2.splitReads.clear \
> ./lculinaris.loadtrimmedReads.err 2>&1
-- Finished on Wed Jun 10 04:49:18 2020 (4 seconds) with 197978.286 GB free disk space
----------------------------------------
--
-- WARNING: gnuplot failed.
--
----------------------------------------
--
-- In sequence store './lculinaris.seqStore':
-- Found 13066705 reads.
-- Found 98038813470 bases (24.5 times coverage).
--
-- Read length histogram (one '*' equals 122186.22 reads):
-- 0 4999 2468173 ********************
-- 5000 9999 8553036 **********************************************************************
-- 10000 14999 1717798 **************
-- 15000 19999 260729 **
-- 20000 24999 51869
-- 25000 29999 10731
-- 30000 34999 2004
-- 35000 39999 659
-- 40000 44999 380
-- 45000 49999 268
-- 50000 54999 225
-- 55000 59999 187
-- 60000 64999 140
-- 65000 69999 132
-- 70000 74999 88
-- 75000 79999 77
-- 80000 84999 69
-- 85000 89999 40
-- 90000 94999 31
-- 95000 99999 11
-- 100000 104999 22
-- 105000 109999 13
-- 110000 114999 8
-- 115000 119999 6
-- 120000 124999 3
-- 125000 129999 2
-- 130000 134999 2
-- 135000 139999 0
-- 140000 144999 1
-- 145000 149999 0
-- 150000 154999 0
-- 155000 159999 0
-- 160000 164999 0
-- 165000 169999 0
-- 170000 174999 0
-- 175000 179999 1
--
-- Purging overlaps used for trimming.
-- Report changed.
-- Finished stage 'obt-dumpReads', reset canuIteration.
----------------------------------------
-- Starting command on Wed Jun 10 04:54:46 2020 with 201192.927 GB free disk space
cd .
/mnt/stori/home/fk8jybr/canu-1.9/Linux-amd64/bin/sqStoreDumpFASTQ \
-trimmed \
-S ./lculinaris.seqStore \
-o ./lculinaris.trimmedReads.gz \
-fasta \
-nolibname \
> ./lculinaris.trimmedReads.fasta.err 2>&1
-- Finished on Wed Jun 10 06:05:21 2020 (4236 seconds) with 201163.05 GB free disk space
----------------------------------------
--
-- Trimmed reads saved in 'lculinaris.trimmedReads.fasta.gz'.
-- No change in report.
-- Finished stage 'cor-dumpTrimmedReads', reset canuIteration.
--
-- Bye.
I would like to continue with the assembly process. These are the planned parameters:
canu -assemble -p lculinaris -d lculinaris genomeSize=4g batMemory=124g batThreads=48 gridOptions="--time=7-00:00:00 --partition=prod --account XXXX" gridEngineArrayOption="-a ARRAY_JOBS%5" gridOptionsOVS="--cpus-per-task=16 --mem-per-cpu=2500m" correctedErrorRate=0.085 corMhapFilterThreshold=0.0000000002 corMhapOptions="--threshold 0.80 --num-hashes 512 --num-min-matches 3 --ordered-sketch-size 1000 --ordered-kmer-size 14 --min-olap-length 2000 --repeat-idf-scale 50" mhapMemory=60g mhapBlockSize=500 ovlMerDistinct=0.975 -pacbio-corrected lculinaris.trimmedReads.fasta.gz
Are these parameters good enough for the Canu assembly? Should I change anything in the parameters?
The parameters look OK; all the mhap-related options are unnecessary since they are not used in the trimming/unitigging steps. I'd also suggest updating to v2.0 if you can and running the same command with the trimmed reads, since there have been a bunch of fixes and improvements between 1.9 and 2.0.
Thank you for your answer. I updated Canu from v1.9 to v2.0 and deleted the mhap-related options.
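The resulting command, with the mhap-specific options removed as suggested (account name redacted as elsewhere in this thread), would look roughly like:

```
canu -assemble -p lculinaris -d lculinaris genomeSize=4g \
  batMemory=124g batThreads=48 \
  gridOptions="--time=7-00:00:00 --partition=prod --account XXXX" \
  gridEngineArrayOption="-a ARRAY_JOBS%5" \
  gridOptionsOVS="--cpus-per-task=16 --mem-per-cpu=2500m" \
  correctedErrorRate=0.085 \
  -pacbio-corrected lculinaris.trimmedReads.fasta.gz
```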
Dear @skoren
I got an error at the contig construction step (bogart). This is the error message:
-- Canu 2.0
--
-- Detected Java(TM) Runtime Environment '1.8.0_242' (from 'java') with -d64 support.
--
-- WARNING:
-- WARNING: Failed to run gnuplot using command 'gnuplot'.
-- WARNING: Plots will be disabled.
-- WARNING:
--
-- Detected 48 CPUs and 126 gigabytes of memory.
-- Detected Slurm with 'sinfo' binary in /usr/bin/sinfo.
-- Detected Slurm with task IDs up to 511 allowed.
--
-- Found 2 hosts with 24 cores and 22 GB memory under Slurm control.
-- Found 48 hosts with 48 cores and 124 GB memory under Slurm control.
--
-- (tag)Threads
-- (tag)Memory |
-- (tag) | | algorithm
-- ------- ---------- -------- -----------------------------
-- Grid: meryl 24.000 GB 8 CPUs (k-mer counting)
-- Grid: hap 16.000 GB 24 CPUs (read-to-haplotype assignment)
-- Grid: cormhap 22.000 GB 16 CPUs (overlap detection with mhap)
-- Grid: obtovl 24.000 GB 16 CPUs (overlap detection)
-- Grid: utgovl 24.000 GB 16 CPUs (overlap detection)
-- Grid: cor 24.000 GB 4 CPUs (read correction)
-- Grid: ovb 4.000 GB 1 CPU (overlap store bucketizer)
-- Grid: ovs 32.000 GB 1 CPU (overlap store sorting)
-- Grid: red 41.000 GB 8 CPUs (read error detection)
-- Grid: oea 8.000 GB 1 CPU (overlap error adjustment)
-- Grid: bat 120.000 GB 48 CPUs (contig construction with bogart)
-- Grid: cns -.--- GB 8 CPUs (consensus)
-- Grid: gfa 64.000 GB 32 CPUs (GFA alignment and processing)
--
-- In 'lculinaris.seqStore', found PacBio CLR reads:
-- PacBio CLR: 1
--
-- Corrected: 1
-- Corrected and Trimmed: 1
--
-- Generating assembly 'lculinaris' in '/mnt/stori/home/fk8jybr/output/canu_assemble/lculinaris':
-- - assemble corrected and trimmed reads.
--
-- Parameters:
--
-- genomeSize 4000000000
--
-- Overlap Generation Limits:
-- corOvlErrorRate 0.2400 ( 24.00%)
-- obtOvlErrorRate 0.0850 ( 8.50%)
-- utgOvlErrorRate 0.0850 ( 8.50%)
--
-- Overlap Processing Limits:
-- corErrorRate 0.3000 ( 30.00%)
-- obtErrorRate 0.0850 ( 8.50%)
-- utgErrorRate 0.0850 ( 8.50%)
-- cnsErrorRate 0.0850 ( 8.50%)
--
--
-- BEGIN ASSEMBLY
--
-- No change in report.
-- No change in report.
--
-- Bogart failed, retry
--
-- No change in report.
--
-- Running jobs. Second attempt out of 2.
--
-- Failed to submit compute jobs. Delay 10 seconds and try again.
CRASH:
CRASH: Canu 2.0
CRASH: Please panic, this is abnormal.
CRASH:
CRASH: Failed to submit compute jobs.
CRASH:
CRASH: Failed at /mnt/stori/home/fk8jybr/canu-2.0/Linux-amd64/bin/../lib/site_perl/canu/Execution.pm line 1275.
CRASH: canu::Execution::submitOrRunParallelJob('lculinaris', 'bat', 'unitigging/4-unitigger', 'unitigger', 1) called at /mnt/stori/home/fk8jybr/canu-2.0/Linux-amd64/bin/../lib/site_perl/canu/Unitig.pm line 350
CRASH: canu::Unitig::unitigCheck('lculinaris') called at /mnt/stori/home/fk8jybr/canu-2.0/Linux-amd64/bin/canu line 1069
CRASH:
CRASH: Last 50 lines of the relevant log file (unitigging/4-unitigger/unitigger.jobSubmit-01.out):
CRASH:
CRASH: sbatch: error: Batch job submission failed: Requested node configuration is not available
CRASH:
I checked the .sh submit script:
#!/bin/sh
sbatch \
--cpus-per-task=48 --mem-per-cpu=3200m --time=7-00:00:00 --partition=prod --account denolen -o unitigger.%A_%a.out \
-D `pwd` -J "bat_lculinaris" \
-a 1-1%5 \
`pwd`/unitigger.sh 0 \
> ./unitigger.jobSubmit-01.out 2>&1
Canu tried to allocate 48 CPUs with 3200 MB of memory per CPU. The HPC cluster I can use only allows a maximum of 2600 MB of memory per CPU, and one node has at most 48 CPUs, so the maximum memory is 124800 MB per node. How can I add a parameter to Canu so it requests at most 2600 MB of memory per CPU and at most 48 CPUs per node?
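As a sanity check on that arithmetic (the numbers are the ones quoted above; whether a stage-specific override such as gridOptionsBAT exists is an assumption by analogy with the gridOptionsOVS option already used in this thread, so verify it with `canu -options` first):

```shell
# Node limit quoted above: 124800 MB spread across 48 CPUs.
echo $((124800 / 48))      # 2600 MB/CPU, the partition's ceiling
# The failed sbatch asked for 48 CPUs at 3200 MB/CPU = 153600 MB,
# which no node can satisfy. A 120 GB bogart request over 48 CPUs
# would stay just under the per-CPU limit:
echo $((120 * 1024 / 48))  # 2560 MB/CPU
```

So one hedged option would be capping the bogart stage's per-CPU request, e.g. with gridOptionsBAT="--mem-per-cpu=2500m", if your Canu version supports that tag.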
The memory isn't an issue here, the error is that before you had:
-- Bogart failed, retry
--
-- No change in report.
--
-- Running jobs. Second attempt out of 2.
so it already failed once; post the logs from the failed run (the *.err and *.out files in the unitigging/4-unitigger folder).
Thank you @skoren for your answer.
This is the *.out file:
Found perl:
/usr/bin/perl
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
Found java:
/mnt/stori/home/fk8jybr/.linuxbrew/bin/java
openjdk version "1.8.0_242"
Found canu:
/mnt/stori/home/fk8jybr/canu-2.0/Linux-amd64/bin/canu
Canu 2.0
Running job 1 based on SLURM_ARRAY_TASK_ID=1 and offset=0.
/var/spool/slurmd/job423491/slurm_script: line 128: 4444 Killed $bin/bogart -S ../../lculinaris.seqStore -O ../lculinaris.ovlStore -o ./lculinaris -gs 4000000000 -eg 0.085 -eM 0.085 -mo 500 -covgapolap 500 -lopsided nobest 50 -minolappercent 0.0 -dg 12 -db 12 -dr 6 -ca 2100 -cp 200 -threads 48 -M 120 -unassembled 2 0 1.0 0.5 3 > ./unitigger.err 2>&1
bogart appears to have failed. No lculinaris.ctgStore or lculinaris.utgStore found.
This is the *.err file:
COVGAPOLAP 500
LOPSIDED NOBEST 50
MINOLAPPERCENT 0.000000
==> PARAMETERS.
Resources:
Memory 120 GB
Compute Threads 48 (command line)
Lengths:
Minimum read 0 bases
Minimum overlap 500 bases
Overlap Error Rates:
Graph 0.085 (8.500%)
Max 0.085 (8.500%)
Deviations:
Graph 12.000
Bubble 12.000
Repeat 6.000
Similarity Thresholds:
Graph 0.000
Bubble 0.100
Repeat 0.100
Edge Confusion:
Absolute 2100
Percent 200.0000
Unitig Construction:
Minimum intersection 500 bases
Maxiumum placements 2 positions
Debugging Enabled:
(none)
==> LOADING AND FILTERING OVERLAPS.
sqStore_loadMetadata()-- Using 'corrected-trimmed' 0x10 reads.
ReadInfo()-- Using 13066705 reads, no minimum read length used.
OverlapCache()-- limited to 122880MB memory (user supplied).
OverlapCache()-- 99MB for read data.
OverlapCache()-- 398MB for best edges.
OverlapCache()-- 1295MB for tigs.
OverlapCache()-- 348MB for tigs - read layouts.
OverlapCache()-- 498MB for tigs - error profiles.
OverlapCache()-- 30720MB for tigs - error profile overlaps.
OverlapCache()-- 12890MB for other processes.
OverlapCache()-- ---------
OverlapCache()-- 46501MB for data structures (sum of above).
OverlapCache()-- ---------
OverlapCache()-- 249MB for overlap store structure.
OverlapCache()-- 76129MB for overlap data.
OverlapCache()-- ---------
OverlapCache()-- 122880MB allowed.
OverlapCache()--
OverlapCache()-- Retain at least 49 overlaps/read, based on 24.51x coverage.
OverlapCache()-- Initial guess at 381 overlaps/read.
OverlapCache()--
OverlapCache()-- Adjusting for sparse overlaps.
OverlapCache()--
OverlapCache()-- reads loading olaps olaps memory
OverlapCache()-- olaps/read all some loaded free
OverlapCache()-- ---------- ------- ------- ----------- ------- --------
OverlapCache()-- 381 8912732 4153973 2670750356 46.69% 35377 MB
OverlapCache()-- 939 11202944 1863761 4240043400 74.13% 11431 MB
OverlapCache()-- 1340 11938878 1127827 531789490 84.39% 2479 MB
OverlapCache()-- 1484 12122076 944629 680706689 86.99% 206 MB
OverlapCache()-- 1498 12138239 928466 693826470 87.22% 6 MB
OverlapCache()--
OverlapCache()-- Loading overlaps.
OverlapCache()--
OverlapCache()-- read from store saved in cache
OverlapCache()-- ------------ --------- ------------ ---------
OverlapCache()-- 41560581 (000.73%) 36134880 (000.63%)
OverlapCache()-- 84746468 (001.48%) 73716353 (001.29%)
OverlapCache()-- 127774283 (002.23%) 111211142 (001.94%)
OverlapCache()-- 172287456 (003.01%) 149696721 (002.62%)
OverlapCache()-- 215650520 (003.77%) 187260435 (003.27%)
OverlapCache()-- 259225282 (004.53%) 225024430 (003.93%)
OverlapCache()-- 302505057 (005.29%) 262812507 (004.59%)
OverlapCache()-- 345795227 (006.05%) 300504962 (005.25%)
OverlapCache()-- 388416681 (006.79%) 337708756 (005.90%)
OverlapCache()-- 432216937 (007.56%) 375819289 (006.57%)
OverlapCache()-- 475081984 (008.31%) 413229209 (007.22%)
OverlapCache()-- 517225067 (009.04%) 450221646 (007.87%)
OverlapCache()-- 560817149 (009.80%) 488095206 (008.53%)
OverlapCache()-- 604144073 (010.56%) 525712799 (009.19%)
OverlapCache()-- 648880791 (011.34%) 564146579 (009.86%)
OverlapCache()-- 693161699 (012.12%) 602523558 (010.53%)
OverlapCache()-- 736505407 (012.88%) 640161810 (011.19%)
OverlapCache()-- 780028764 (013.64%) 678035570 (011.85%)
OverlapCache()-- 824364505 (014.41%) 716058311 (012.52%)
OverlapCache()-- 868324876 (015.18%) 754319430 (013.19%)
OverlapCache()-- 913155225 (015.96%) 792916397 (013.86%)
OverlapCache()-- 957366036 (016.74%) 830986947 (014.53%)
OverlapCache()-- 1001508335 (017.51%) 869036571 (015.19%)
OverlapCache()-- 1045342918 (018.28%) 906881301 (015.86%)
OverlapCache()-- 1090489707 (019.07%) 945749441 (016.53%)
OverlapCache()-- 1136258113 (019.87%) 984959862 (017.22%)
OverlapCache()-- 1182699675 (020.68%) 1024242493 (017.91%)
OverlapCache()-- 1228618963 (021.48%) 1063741566 (018.60%)
OverlapCache()-- 1274196514 (022.28%) 1102818505 (019.28%)
OverlapCache()-- 1319504765 (023.07%) 1141659308 (019.96%)
OverlapCache()-- 1365511550 (023.87%) 1181179331 (020.65%)
OverlapCache()-- 1409886806 (024.65%) 1219876700 (021.33%)
OverlapCache()-- 1453940821 (025.42%) 1258065288 (021.99%)
OverlapCache()-- 1498885214 (026.21%) 1296739011 (022.67%)
OverlapCache()-- 1545005558 (027.01%) 1335890534 (023.36%)
OverlapCache()-- 1592267632 (027.84%) 1375722299 (024.05%)
OverlapCache()-- 1640105989 (028.67%) 1415986296 (024.76%)
OverlapCache()-- 1687227767 (029.50%) 1456011685 (025.46%)
OverlapCache()-- 1734200184 (030.32%) 1495999165 (026.15%)
OverlapCache()-- 1780692947 (031.13%) 1535425417 (026.84%)
OverlapCache()-- 1828421800 (031.97%) 1576007846 (027.55%)
OverlapCache()-- 1875641445 (032.79%) 1615832102 (028.25%)
OverlapCache()-- 1923741692 (033.63%) 1656256798 (028.96%)
OverlapCache()-- 1970765986 (034.46%) 1695982769 (029.65%)
OverlapCache()-- 2018596202 (035.29%) 1736558000 (030.36%)
OverlapCache()-- 2064790309 (036.10%) 1775995194 (031.05%)
OverlapCache()-- 2111885099 (036.92%) 1816430628 (031.76%)
OverlapCache()-- 2158187752 (037.73%) 1856012555 (032.45%)
OverlapCache()-- 2205198946 (038.55%) 1895875819 (033.15%)
OverlapCache()-- 2251542255 (039.36%) 1935248375 (033.83%)
OverlapCache()-- 2297888547 (040.17%) 1974947300 (034.53%)
OverlapCache()-- 2343751932 (040.98%) 2014098665 (035.21%)
OverlapCache()-- 2389476737 (041.78%) 2053296140 (035.90%)
OverlapCache()-- 2435402940 (042.58%) 2092594905 (036.59%)
OverlapCache()-- 2477732730 (043.32%) 2129352009 (037.23%)
OverlapCache()-- 2519035704 (044.04%) 2165025002 (037.85%)
OverlapCache()-- 2560670900 (044.77%) 2201140257 (038.48%)
OverlapCache()-- 2602530726 (045.50%) 2237415328 (039.12%)
OverlapCache()-- 2644584349 (046.24%) 2273648723 (039.75%)
OverlapCache()-- 2685246575 (046.95%) 2308921955 (040.37%)
OverlapCache()-- 2725700779 (047.65%) 2344206941 (040.98%)
OverlapCache()-- 2766148348 (048.36%) 2379547339 (041.60%)
OverlapCache()-- 2808536295 (049.10%) 2416226438 (042.24%)
OverlapCache()-- 2849916908 (049.83%) 2452332228 (042.87%)
OverlapCache()-- 2891383432 (050.55%) 2488443266 (043.51%)
OverlapCache()-- 2932537066 (051.27%) 2524195437 (044.13%)
OverlapCache()-- 2974858267 (052.01%) 2560972053 (044.77%)
OverlapCache()-- 3017021510 (052.75%) 2597624874 (045.41%)
OverlapCache()-- 3059036141 (053.48%) 2634073367 (046.05%)
OverlapCache()-- 3100100522 (054.20%) 2669992594 (046.68%)
OverlapCache()-- 3141445493 (054.92%) 2706011442 (047.31%)
OverlapCache()-- 3182508874 (055.64%) 2741976352 (047.94%)
OverlapCache()-- 3225074510 (056.38%) 2779241589 (048.59%)
OverlapCache()-- 3265627474 (057.09%) 2814608899 (049.21%)
OverlapCache()-- 3307125397 (057.82%) 2851058887 (049.85%)
OverlapCache()-- 3348103643 (058.54%) 2887009832 (050.47%)
OverlapCache()-- 3390423922 (059.28%) 2923857740 (051.12%)
OverlapCache()-- 3431051641 (059.99%) 2959239805 (051.74%)
OverlapCache()-- 3471661363 (060.70%) 2995079935 (052.36%)
OverlapCache()-- 3513191833 (061.42%) 3031296704 (053.00%)
OverlapCache()-- 3556697233 (062.18%) 3068293456 (053.64%)
OverlapCache()-- 3600446488 (062.95%) 3105397008 (054.29%)
OverlapCache()-- 3644309938 (063.71%) 3142767849 (054.95%)
OverlapCache()-- 3687531415 (064.47%) 3180015455 (055.60%)
OverlapCache()-- 3730397011 (065.22%) 3216834662 (056.24%)
OverlapCache()-- 3772634260 (065.96%) 3253196420 (056.88%)
OverlapCache()-- 3815993510 (066.72%) 3290454756 (057.53%)
OverlapCache()-- 3858946617 (067.47%) 3327351154 (058.17%)
OverlapCache()-- 3901626333 (068.21%) 3364086913 (058.81%)
OverlapCache()-- 3944106556 (068.96%) 3401052818 (059.46%)
OverlapCache()-- 3987094530 (069.71%) 3438060084 (060.11%)
OverlapCache()-- 4030044269 (070.46%) 3475030769 (060.75%)
OverlapCache()-- 4073718821 (071.22%) 3512455826 (061.41%)
OverlapCache()-- 4117572983 (071.99%) 3549933498 (062.06%)
OverlapCache()-- 4160318597 (072.74%) 3586654518 (062.71%)
OverlapCache()-- 4202319875 (073.47%) 3622928580 (063.34%)
OverlapCache()-- 4244883484 (074.21%) 3659660763 (063.98%)
OverlapCache()-- 4286916222 (074.95%) 3695887922 (064.62%)
OverlapCache()-- 4330329382 (075.71%) 3732961076 (065.26%)
OverlapCache()-- 4372887035 (076.45%) 3769503073 (065.90%)
OverlapCache()-- 4415194389 (077.19%) 3806048601 (066.54%)
OverlapCache()-- 4456906915 (077.92%) 3842048940 (067.17%)
OverlapCache()-- 4499095835 (078.66%) 3878567325 (067.81%)
OverlapCache()-- 4541731564 (079.40%) 3914989771 (068.45%)
OverlapCache()-- 4584198592 (080.15%) 3951437841 (069.08%)
OverlapCache()-- 4626932006 (080.89%) 3988284516 (069.73%)
OverlapCache()-- 4673905864 (081.71%) 4028914091 (070.44%)
OverlapCache()-- 4722596235 (082.57%) 4070780002 (071.17%)
OverlapCache()-- 4771348483 (083.42%) 4112737659 (071.90%)
OverlapCache()-- 4819171112 (084.25%) 4153934829 (072.62%)
OverlapCache()-- 4867060444 (085.09%) 4195306914 (073.35%)
OverlapCache()-- 4914571551 (085.92%) 4236061358 (074.06%)
OverlapCache()-- 4962805245 (086.77%) 4277292200 (074.78%)
OverlapCache()-- 5010102003 (087.59%) 4317818812 (075.49%)
OverlapCache()-- 5056471788 (088.40%) 4357973871 (076.19%)
OverlapCache()-- 5104352836 (089.24%) 4398878603 (076.91%)
OverlapCache()-- 5151500553 (090.06%) 4439533843 (077.62%)
OverlapCache()-- 5198409878 (090.88%) 4479681652 (078.32%)
OverlapCache()-- 5241007390 (091.63%) 4516926238 (078.97%)
OverlapCache()-- 5281753691 (092.34%) 4552726601 (079.60%)
OverlapCache()-- 5322765484 (093.06%) 4588644189 (080.22%)
OverlapCache()-- 5363616272 (093.77%) 4624487726 (080.85%)
OverlapCache()-- 5404859761 (094.49%) 4660362312 (081.48%)
OverlapCache()-- 5446434830 (095.22%) 4696593664 (082.11%)
OverlapCache()-- 5487615260 (095.94%) 4732607281 (082.74%)
OverlapCache()-- 5528115091 (096.65%) 4768290049 (083.36%)
OverlapCache()-- 5568470276 (097.35%) 4803596134 (083.98%)
OverlapCache()-- 5609992229 (098.08%) 4840048648 (084.62%)
OverlapCache()-- 5651065168 (098.80%) 4876019781 (085.25%)
OverlapCache()-- 5692250844 (099.52%) 4911985563 (085.88%)
OverlapCache()-- ------------ --------- ------------ ---------
OverlapCache()-- 5719799114 (100.00%) 4936065994 (086.30%)
OverlapCache()--
OverlapCache()-- Ignored 61466148 duplicate overlaps.
OverlapCache()--
OverlapCache()-- Symmetrizing overlaps.
OverlapCache()-- Finding missing twins.
OverlapCache()-- Found 679159097 overlaps with non-symmetric error rates.
OverlapCache()-- Found 560905114 missing twins in 4936065994 overlaps, 747716 are strong.
OverlapCache()-- Dropping weak non-twin overlaps; allocated 0 MB scratch space.
OverlapCache()-- Dropped 139413811 overlaps; scratch space released.
OverlapCache()-- Adding 421491303 missing twin overlaps.
OverlapCache()-- Finished.
BestOverlapGraph()-- Computing Best Overlap Graph.
BestOverlapGraph()-- Allocating best edges (398MB).
BestOverlapGraph()-- Filtering high error edges.
BestOverlapGraph()-- Ignore overlaps with more than 6.423634% error.
BestOverlapGraph()-- Filtering reads with a gap in overlap coverage.
BestOverlapGraph()-- 14474 reads removed.
BestOverlapGraph()-- Filtering reads with lopsided best edges (more than 50% different).
BestOverlapGraph()-- 173783 reads have lopsided edges.
BestOverlapGraph()-- Filtering spur reads.
BestOverlapGraph()-- After initial scan, found:
BestOverlapGraph()-- 412016 spur reads.
BestOverlapGraph()-- 485911 5' spur paths.
BestOverlapGraph()-- 466758 3' spur paths.
BestOverlapGraph()-- After iteration 1, found:
BestOverlapGraph()-- 412016 spur reads.
BestOverlapGraph()-- 478074 5' spur paths; 49196 5' edges changed to avoid a spur path.
BestOverlapGraph()-- 457159 3' spur paths; 60860 5' edges changed to avoid a spur path.
BestOverlapGraph()-- After iteration 2, found:
BestOverlapGraph()-- 412016 spur reads.
BestOverlapGraph()-- 474273 5' spur paths; 3392 5' edges changed to avoid a spur path.
BestOverlapGraph()-- 450834 3' spur paths; 3449 5' edges changed to avoid a spur path.
BestOverlapGraph()-- After iteration 3, found:
BestOverlapGraph()-- 412016 spur reads.
BestOverlapGraph()-- 473922 5' spur paths; 339 5' edges changed to avoid a spur path.
BestOverlapGraph()-- 450501 3' spur paths; 371 5' edges changed to avoid a spur path.
BestOverlapGraph()-- After iteration 4, found:
BestOverlapGraph()-- 412016 spur reads.
BestOverlapGraph()-- 473900 5' spur paths; 22 5' edges changed to avoid a spur path.
BestOverlapGraph()-- 450491 3' spur paths; 44 5' edges changed to avoid a spur path.
BestOverlapGraph()-- After iteration 5, found:
BestOverlapGraph()-- 412016 spur reads.
BestOverlapGraph()-- 473899 5' spur paths; 1 5' edges changed to avoid a spur path.
BestOverlapGraph()-- 450490 3' spur paths; 0 5' edges changed to avoid a spur path.
BestOverlapGraph()-- After iteration 6, found:
BestOverlapGraph()-- 412016 spur reads.
BestOverlapGraph()-- 473899 5' spur paths; 0 5' edges changed to avoid a spur path.
BestOverlapGraph()-- 450489 3' spur paths; 2 5' edges changed to avoid a spur path.
Cleared 0 5' and 0 3' best edges on contained reads.
==> BUILDING GREEDY TIGS.
breakSingletonTigs()-- Removed 372723 singleton tigs; reads are now unplaced.
optimizePositions()-- Optimizing read positions for 13066706 reads in 525613 tigs, with 48 threads.
optimizePositions()-- Allocating scratch space for 13066706 reads (408334 KB).
optimizePositions()-- Initializing positions with 48 threads.
optimizePositions()-- Recomputing positions, iteration 1, with 48 threads.
optimizePositions()-- Reset zero.
optimizePositions()-- Checking convergence.
optimizePositions()-- converged: 12983611 reads
optimizePositions()-- changed: 83095 reads
optimizePositions()-- Recomputing positions, iteration 2, with 48 threads.
optimizePositions()-- Reset zero.
optimizePositions()-- Checking convergence.
optimizePositions()-- converged: 13026055 reads
optimizePositions()-- changed: 40651 reads
optimizePositions()-- Recomputing positions, iteration 3, with 48 threads.
optimizePositions()-- Reset zero.
optimizePositions()-- Checking convergence.
optimizePositions()-- converged: 13061761 reads
optimizePositions()-- changed: 4945 reads
optimizePositions()-- Recomputing positions, iteration 4, with 48 threads.
optimizePositions()-- Reset zero.
optimizePositions()-- Checking convergence.
optimizePositions()-- converged: 13064625 reads
optimizePositions()-- changed: 2081 reads
optimizePositions()-- Recomputing positions, iteration 5, with 48 threads.
optimizePositions()-- Reset zero.
optimizePositions()-- Checking convergence.
optimizePositions()-- converged: 13065287 reads
optimizePositions()-- changed: 1419 reads
optimizePositions()-- Expanding short reads with 48 threads.
optimizePositions()-- Updating positions.
optimizePositions()-- Finished.
splitDiscontinuous()-- Tested 152923 tigs, split 19 tigs into 38 new tigs.
detectSpur() done.
tested 37043
nEdges 5' 25055 3' 26146
nPotential 3686 3878
nVerified 122 118
==> PLACE CONTAINED READS.
computeErrorProfiles()-- Computing error profiles for 525651 tigs, with 48 threads.
computeErrorProfiles()-- Finished.
placeContains()-- placing 11391637 contained and 387197 unplaced reads, with 48 threads.
placeContains()-- Placed 10584191 contained reads and 81121 unplaced reads.
placeContains()-- Failed to place 807446 contained reads (too high error suspected) and 306076 unplaced reads (lack of overlaps suspected).
optimizePositions()-- Optimizing read positions for 13066706 reads in 525651 tigs, with 48 threads.
optimizePositions()-- Allocating scratch space for 13066706 reads (408334 KB).
optimizePositions()-- Initializing positions with 48 threads.
optimizePositions()-- Recomputing positions, iteration 1, with 48 threads.
optimizePositions()-- Reset zero.
optimizePositions()-- Checking convergence.
optimizePositions()-- converged: 10358262 reads
optimizePositions()-- changed: 2708444 reads
optimizePositions()-- Recomputing positions, iteration 2, with 48 threads.
optimizePositions()-- Reset zero.
optimizePositions()-- Checking convergence.
optimizePositions()-- converged: 12519596 reads
optimizePositions()-- changed: 547110 reads
optimizePositions()-- Recomputing positions, iteration 3, with 48 threads.
optimizePositions()-- Reset zero.
optimizePositions()-- Checking convergence.
optimizePositions()-- converged: 12832763 reads
optimizePositions()-- changed: 233943 reads
optimizePositions()-- Recomputing positions, iteration 4, with 48 threads.
optimizePositions()-- Reset zero.
optimizePositions()-- Checking convergence.
optimizePositions()-- converged: 12936543 reads
optimizePositions()-- changed: 130163 reads
optimizePositions()-- Recomputing positions, iteration 5, with 48 threads.
optimizePositions()-- Reset zero.
optimizePositions()-- Checking convergence.
optimizePositions()-- converged: 12969447 reads
optimizePositions()-- changed: 97259 reads
optimizePositions()-- Expanding short reads with 48 threads.
optimizePositions()-- Updating positions.
optimizePositions()-- Finished.
splitDiscontinuous()-- Tested 153013 tigs, split 55 tigs into 112 new tigs.
==> MERGE ORPHANS.
computeErrorProfiles()-- Computing error profiles for 525763 tigs, with 48 threads.
computeErrorProfiles()-- Finished.
findPotentialOrphans()-- working on 525763 tigs.
findPotentialOrphans()-- found 57408 potential orphans.
mergeOrphans()-- flagged 31165 bubble tigs with 1230257 reads
mergeOrphans()-- placed 210 unique orphan tigs with 453 reads
mergeOrphans()-- shattered 486 repeat orphan tigs with 1006 reads
mergeOrphans()-- ignored 0 tigs with 0 reads; failed to place
mergeOrphans()--
----------------------------------------
Building new graph after removing 10666748 placed reads and 1230257 bubble reads.
BestOverlapGraph()-- Computing Best Overlap Graph.
BestOverlapGraph()-- Allocating best edges (398MB).
BestOverlapGraph()-- Filtering high error edges.
BestOverlapGraph()-- Ignore overlaps with more than 6.315305% error.
BestOverlapGraph()-- Filtering reads with a gap in overlap coverage.
BestOverlapGraph()-- 72540 reads removed.
BestOverlapGraph()-- Filtering reads with lopsided best edges (more than 50% different).
BestOverlapGraph()-- 297165 reads have lopsided edges.
BestOverlapGraph()-- Filtering spur reads.
BestOverlapGraph()-- After initial scan, found:
BestOverlapGraph()-- 565499 spur reads.
BestOverlapGraph()-- 660489 5' spur paths.
BestOverlapGraph()-- 637847 3' spur paths.
BestOverlapGraph()-- After iteration 1, found:
BestOverlapGraph()-- 565499 spur reads.
BestOverlapGraph()-- 652162 5' spur paths; 53522 5' edges changed to avoid a spur path.
BestOverlapGraph()-- 628229 3' spur paths; 62199 5' edges changed to avoid a spur path.
BestOverlapGraph()-- After iteration 2, found:
BestOverlapGraph()-- 565499 spur reads.
BestOverlapGraph()-- 647701 5' spur paths; 4104 5' edges changed to avoid a spur path.
BestOverlapGraph()-- 621990 3' spur paths; 4159 5' edges changed to avoid a spur path.
BestOverlapGraph()-- After iteration 3, found:
BestOverlapGraph()-- 565499 spur reads.
BestOverlapGraph()-- 647210 5' spur paths; 521 5' edges changed to avoid a spur path.
BestOverlapGraph()-- 621510 3' spur paths; 513 5' edges changed to avoid a spur path.
BestOverlapGraph()-- After iteration 4, found:
BestOverlapGraph()-- 565499 spur reads.
BestOverlapGraph()-- 647192 5' spur paths; 17 5' edges changed to avoid a spur path.
BestOverlapGraph()-- 621499 3' spur paths; 32 5' edges changed to avoid a spur path.
BestOverlapGraph()-- After iteration 5, found:
BestOverlapGraph()-- 565499 spur reads.
BestOverlapGraph()-- 647192 5' spur paths; 0 5' edges changed to avoid a spur path.
BestOverlapGraph()-- 621499 3' spur paths; 0 5' edges changed to avoid a spur path.
Cleared 0 5' and 0 3' best edges on contained reads.
classifyAsUnassembled()-- 7 tigs 26659 bases -- singleton
classifyAsUnassembled()-- 0 tigs 0 bases -- too few reads (< 2 reads)
classifyAsUnassembled()-- 0 tigs 0 bases -- too short (< 0 bp)
classifyAsUnassembled()-- 0 tigs 0 bases -- single spanning read (> 1.000000 tig length)
classifyAsUnassembled()-- 2266 tigs 32382999 bases -- low coverage (> 0.500000 tig length at < 3 coverage)
classifyAsUnassembled()-- 151170 tigs 3868425641 bases -- acceptable contigs
==> GENERATING ASSEMBLY GRAPH.
computeErrorProfiles()-- Computing error profiles for 525763 tigs, with 48 threads.
computeErrorProfiles()-- Finished.
AssemblyGraph()-- allocating vectors for placements, 598.146MB
AssemblyGraph()-- finding edges for 11953183 reads (10584191 contained), ignoring 1113522 unplaced reads, with 48 threads.
There's no Bogart error in your posted log, the only message is:
/var/spool/slurmd/job423491/slurm_script: line 128: 4444 Killed $bin/bogart -S ../../lculinaris.seqStore -O ../lculinaris.ovlStore -o ./lculinaris -gs 4000000000 -eg 0.085 -eM 0.085 -mo 500 -covgapolap 500 -lopsided nobest 50 -minolappercent 0.0 -dg 12 -db 12 -dr 6 -ca 2100 -cp 200 -threads 48 -M 120 -unassembled 2 0 1.0 0.5 3 > ./unitigger.err 2>&1
so I presume your cluster killed the Bogart run due to memory or a timeout. Check your cluster job history to see why the job failed (did it use too much memory, or did it hit a timeout?). You can probably edit the command to lower the memory to -M 100 or even -M 60 without issue. Once you figure out the issue, run the unitigging.sh script by hand again, and then you can resume Canu.
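The manual retry described above could look like the following sketch. The paths and the -M value are taken from the command shown in this thread; adjust them to your own run directory before use, and note that sed edits Canu's generated script in place:

```shell
# Sketch of the manual retry (paths as shown in this thread; adjust to
# your run). Lower bogart's memory cap by editing the -M flag in the
# generated script, then run array task 1 by hand:
cd unitigging/4-unitigger
sed -i 's/-M 120/-M 100/' unitigger.sh   # or '-M 60' for a lower cap
./unitigger.sh 1
# After it finishes, re-launch the original canu command; Canu detects
# the completed job and resumes from there.
```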
Thanks @skoren for your answer. I tried running unitigging.sh but I got this error:
Found perl:
/usr/bin/perl
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
Found java:
/mnt/stori/home/fk8jybr/.linuxbrew/bin/java
openjdk version "1.8.0_242"
Found canu:
/mnt/stori/home/fk8jybr/canu-2.0/Linux-amd64/bin/canu
Canu 2.0
Error: I need SLURM_ARRAY_TASK_ID set, or a job index on the command line.
You need to provide a job number to unitigging.sh, so run it as unitigging.sh 1
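The error message comes from the index-resolution logic at the top of the scripts Canu generates. This is a simplified sketch of that pattern, not Canu's actual code: the index comes from SLURM_ARRAY_TASK_ID when run under sbatch, otherwise from the first command-line argument, which is why running the script by hand requires an explicit job number.

```shell
#!/bin/sh
# Simplified sketch (not Canu's actual script) of how a generated
# per-job script resolves its task index.
unset SLURM_ARRAY_TASK_ID   # for a deterministic standalone demo

resolve_jobid() {
  jobid=$SLURM_ARRAY_TASK_ID          # set by sbatch for array jobs
  [ -z "$jobid" ] && jobid=$1         # fall back to the first argument
  if [ -z "$jobid" ]; then
    echo "need SLURM_ARRAY_TASK_ID or a job index" >&2
    return 1
  fi
  echo "$jobid"
}

# Running the script by hand supplies the index as $1:
resolve_jobid 1    # prints "1"
```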
I tried what you suggested. I changed the CPU and memory to 30 CPUs and 60 GB of RAM in the unitigging.sh script, but I did not change the CPU and memory in the batch script. So I allocated 48 CPUs and 124 GB of RAM in the batch script but used only 30 CPUs and 60 GB of RAM in the unitigging.sh script. With this change the script finished successfully, and after that I reran Canu.
I have PacBio raw data for an approximately 4 Gbp plant genome with about 32x coverage. I use Canu 2.0 on a cluster with only one node (48 CPU cores and 126 GB of RAM).
First, I tried running only error correction on my raw data FASTQ file:
canu -correct -p lculinaris -d $HOME/output/canu_trim/lculinaris genomeSize=4.0g batMemory=124M -pacbio $HOME/input/pacbio_raw/LC001pacbio.fastq.gz
I got this output report from SLURM:
I got this error report from canu:
Error log from the meryl-count job:
Can you help me with this issue?