marbl / canu

A single molecule sequence assembler for genomes large and small.
http://canu.readthedocs.io/

canu keeps failing. HELP! #525

Closed: raw937 closed this issue 7 years ago

raw937 commented 7 years ago

Hello,

I am using the current Linux-amd64 build. Canu keeps failing at filterCorrectionOverlaps with no error output, or it just stops. I am running on a large-memory machine with 2 TB of RAM.

Commands I have tried:

    canu -p RA -d RA-auto genomeSize=500m -pacbio-raw All_pacbio_data.fastq useGrid=false
    canu -correct -p RA -d RA_correct -genomeSize=500m -pacbio-raw All_pacbio_data.fastq useGrid=false

It's a 500 Mbp genome with 41 GB of raw PacBio data from RS II technology.

It's a polyploid plant genome. I can send more error reports if needed. I really need to get this assembled ASAP. Help!

Using Java 1.8.0_31.

Outputs:

Command one output:

contigFilter 2 1000 0.75 0.75 2 num 5
-- Canu snapshot v1.5 +54 changes (r8254 f356c2c3f2eb37b53c4e7bf11e927e3fdff4d747)
--
-- CITATIONS
--
-- Koren S, Walenz BP, Berlin K, Miller JR, Phillippy AM.
-- Canu: scalable and accurate long-read assembly via adaptive k-mer weighting and repeat separation.
-- Genome Res. 2017 May;27(5):722-736.
-- http://doi.org/10.1101/gr.215087.116
-- 
-- Read and contig alignments during correction, consensus and GFA building use:
--   Šošić M, Šikić M.
--   Edlib: a C/C++ library for fast, exact sequence alignment using edit distance.
--   Bioinformatics. 2017 May 1;33(9):1394-1395.
--   http://doi.org/10.1093/bioinformatics/btw753
-- 
-- Overlaps are generated using:
--   Berlin K, et al.
--   Assembling large genomes with single-molecule sequencing and locality-sensitive hashing.
--   Nat Biotechnol. 2015 Jun;33(6):623-30.
--   http://doi.org/10.1038/nbt.3238
-- 
--   Myers EW, et al.
--   A Whole-Genome Assembly of Drosophila.
--   Science. 2000 Mar 24;287(5461):2196-204.
--   http://doi.org/10.1126/science.287.5461.2196
-- 
--   Li H.
--   Minimap and miniasm: fast mapping and de novo assembly for noisy long sequences.
--   Bioinformatics. 2016 Jul 15;32(14):2103-10.
--   http://doi.org/10.1093/bioinformatics/btw152
-- 
-- Corrected read consensus sequences are generated using an algorithm derived from FALCON-sense:
--   Chin CS, et al.
--   Phased diploid genome assembly with single-molecule real-time sequencing.
--   Nat Methods. 2016 Dec;13(12):1050-1054.
--   http://doi.org/10.1038/nmeth.4035
-- 
-- Contig consensus sequences are generated using an algorithm derived from pbdagcon:
--   Chin CS, et al.
--   Nonhybrid, finished microbial genome assemblies from long-read SMRT sequencing data.
--   Nat Methods. 2013 Jun;10(6):563-9
--   http://doi.org/10.1038/nmeth.2474
-- 
-- CONFIGURE CANU
--
-- Detected Java(TM) Runtime Environment '1.8.0_31' (from '/share/apps/jdk/1.8.0_31/bin/java').
-- Detected gnuplot version '4.6 patchlevel 5' (from 'gnuplot') and image format 'png'.
-- Detected 80 CPUs and 2020 gigabytes of memory.
-- Detected Slurm with 'sinfo' binary in /usr/bin/sinfo.
-- Grid engine disabled per useGrid=false option.
--
--                            (tag)Concurrency
--                     (tag)Threads          |
--            (tag)Memory         |          |
--        (tag)         |         |          |  algorithm
--        -------  ------  --------   --------  -----------------------------
-- Local: meryl     64 GB   16 CPUs x   5 jobs  (k-mer counting)
-- Local: cormhap   32 GB   16 CPUs x   5 jobs  (overlap detection with mhap)
-- Local: obtovl    12 GB    8 CPUs x  10 jobs  (overlap detection)
-- Local: utgovl    12 GB    8 CPUs x  10 jobs  (overlap detection)
-- Local: cor       20 GB    4 CPUs x  20 jobs  (read correction)
-- Local: ovb        4 GB    1 CPU  x  80 jobs  (overlap store bucketizer)
-- Local: ovs       16 GB    1 CPU  x  80 jobs  (overlap store sorting)
-- Local: red        8 GB    8 CPUs x  10 jobs  (read error detection)
-- Local: oea        2 GB    1 CPU  x  80 jobs  (overlap error adjustment)
-- Local: bat      256 GB   16 CPUs x   5 jobs  (contig construction)
-- Local: cns       48 GB    8 CPUs x  10 jobs  (consensus)
-- Local: gfa       16 GB   16 CPUs x   5 jobs  (GFA alignment and processing)
--
-- Generating assembly 'red_alder' in '/pic/projects/red_alder/PacBio_data/red_alder-auto'
--
-- Parameters:
--
--  genomeSize        500000000
--
--  Overlap Generation Limits:
--    corOvlErrorRate 0.2400 ( 24.00%)
--    obtOvlErrorRate 0.0450 (  4.50%)
--    utgOvlErrorRate 0.0450 (  4.50%)
--
--  Overlap Processing Limits:
--    corErrorRate    0.3000 ( 30.00%)
--    obtErrorRate    0.0450 (  4.50%)
--    utgErrorRate    0.0450 (  4.50%)
--    cnsErrorRate    0.0750 (  7.50%)
--
--
-- BEGIN CORRECTION
--
----------------------------------------
-- Starting command on Mon Jun  5 14:18:12 2017 with 510610.967 GB free disk space

    cd correction
    /people/whit040/canu/Linux-amd64/bin/gatekeeperCreate \
      -minlength 1000 \
      -o ./red_alder.gkpStore.BUILDING \
      ./red_alder.gkpStore.gkp \
    > ./red_alder.gkpStore.BUILDING.err 2>&1

-- Finished on Mon Jun  5 14:24:05 2017 (353 seconds) with 510626.637 GB free disk space
----------------------------------------
--
-- In gatekeeper store 'correction/red_alder.gkpStore':
--   Found 2811531 reads.
--   Found 21382870793 bases (42.76 times coverage).
--
--   Read length histogram (one '*' equals 4077.81 reads):
--        0    999      0 
--     1000   1999 285447 **********************************************************************
--     2000   2999 236550 **********************************************************
--     3000   3999 226336 *******************************************************
--     4000   4999 220844 ******************************************************
--     5000   5999 222772 ******************************************************
--     6000   6999 253917 **************************************************************
--     7000   7999 258272 ***************************************************************
--     8000   8999 222325 ******************************************************
--     9000   9999 177346 *******************************************
--    10000  10999 141145 **********************************
--    11000  11999 112454 ***************************
--    12000  12999  90117 **********************
--    13000  13999  72847 *****************
--    14000  14999  58551 **************
--    15000  15999  47776 ***********
--    16000  16999  38469 *********
--    17000  17999  30536 *******
--    18000  18999  24768 ******
--    19000  19999  19144 ****
--    20000  20999  15520 ***
--    21000  21999  12093 **
--    22000  22999   9417 **
--    23000  23999   7519 *
--    24000  24999   5851 *
--    25000  25999   4609 *
--    26000  26999   3560 
--    27000  27999   2800 
--    28000  28999   2142 
--    29000  29999   1722 
--    30000  30999   1279 
--    31000  31999   1050 
--    32000  32999    799 
--    33000  33999    632 
--    34000  34999    580 
--    35000  35999    431 
--    36000  36999    366 
--    37000  37999    282 
--    38000  38999    230 
--    39000  39999    176 
--    40000  40999    151 
--    41000  41999    144 
--    42000  42999     81 
--    43000  43999     85 
--    44000  44999     64 
--    45000  45999     69 
--    46000  46999     40 
--    47000  47999     46 
--    48000  48999     41 
--    49000  49999     24 
--    50000  50999     20 
--    51000  51999     15 
--    52000  52999     11 
--    53000  53999     11 
--    54000  54999     10 
--    55000  55999      8 
--    56000  56999      4 
--    57000  57999      4 
--    58000  58999      8 
--    59000  59999      4 
--    60000  60999      4 
--    61000  61999      0 
--    62000  62999      1 
--    63000  63999      2 
--    64000  64999      2 
--    65000  65999      3 
--    66000  66999      2 
--    67000  67999      1 
--    68000  68999      0 
--    69000  69999      1 
--    70000  70999      1 
-- Finished stage 'cor-gatekeeper', reset canuIteration.
-- Finished stage 'merylConfigure', reset canuIteration.
--
-- Running jobs.  First attempt out of 2.
----------------------------------------
-- Starting 'meryl' concurrent execution on Mon Jun  5 14:24:46 2017 with 510617.658 GB free disk space (1 processes; 5 concurrently)

    cd correction/0-mercounts
    ./meryl.sh 1 > ./meryl.000001.out 2>&1

-- Finished on Mon Jun  5 15:07:30 2017 (2564 seconds) with 510717.138 GB free disk space
----------------------------------------
-- Meryl finished successfully.
-- Finished stage 'merylCheck', reset canuIteration.
--
--  16-mers                                                                                           Fraction
--    Occurrences   NumMers                                                                         Unique Total
--       1-     1 260923796 *******************************************                            0.1311 0.0122
--       2-     2 270090218 ********************************************                           0.2668 0.0375
--       3-     4 421559464 ********************************************************************** 0.3840 0.0703
--       5-     7 363844185 ************************************************************           0.5535 0.1406
--       8-    11 245285830 ****************************************                               0.7008 0.2349
--      12-    16 154436899 *************************                                              0.8047 0.3346
--      17-    22  95371117 ***************                                                        0.8726 0.4279
--      23-    29  59128886 *********                                                              0.9156 0.5090
--      30-    37  37246447 ******                                                                 0.9428 0.5767
--      38-    46  24008183 ***                                                                    0.9602 0.6321
--      47-    56  15893333 **                                                                     0.9716 0.6773
--      57-    67  10802170 *                                                                      0.9792 0.7142
--      68-    79   7562216 *                                                                      0.9844 0.7446
--      80-    92   5441502                                                                        0.9881 0.7699
--      93-   106   4013968                                                                        0.9907 0.7913
--     107-   121   3000712                                                                        0.9927 0.8097
--     122-   137   2274562                                                                        0.9942 0.8254
--     138-   154   1746320                                                                        0.9953 0.8390
--     155-   172   1349130                                                                        0.9961 0.8507
--     173-   191   1050865                                                                        0.9968 0.8609
--     192-   211    826698                                                                        0.9973 0.8698
--     212-   232    659717                                                                        0.9977 0.8775
--     233-   254    532960                                                                        0.9981 0.8843
--     255-   277    439248                                                                        0.9983 0.8903
--     278-   301    366645                                                                        0.9985 0.8958
--     302-   326    309908                                                                        0.9987 0.9007
--     327-   352    263225                                                                        0.9989 0.9052
--     353-   379    226015                                                                        0.9990 0.9094
--     380-   407    193906                                                                        0.9991 0.9132
--     408-   436    165653                                                                        0.9992 0.9168
--     437-   466    142045                                                                        0.9993 0.9201
--     467-   497    122302                                                                        0.9994 0.9230
--     498-   529    105394                                                                        0.9994 0.9258
--     530-   562     90636                                                                        0.9995 0.9283
--     563-   596     78960                                                                        0.9995 0.9306
--     597-   631     69050                                                                        0.9996 0.9328
--     632-   667     60613                                                                        0.9996 0.9347
--     668-   704     53835                                                                        0.9996 0.9366
--     705-   742     47774                                                                        0.9997 0.9383
--     743-   781     42084                                                                        0.9997 0.9399
--     782-   821     37483                                                                        0.9997 0.9414
--
--    16588315 (max occurrences)
-- 21079774032 (total mers, non-unique)
--  1729483449 (distinct mers, non-unique)
--   260923796 (unique mers)
-- For mhap overlapping, set repeat k-mer threshold to 213406.
--
-- Found 21340697828 16-mers; 1990407245 distinct and 260923796 unique.  Largest count 16588315.
-- Finished stage 'cor-meryl', reset canuIteration.
--
-- OVERLAPPER (mhap) (correction)
--
-- Set corMhapSensitivity=normal based on read coverage of 42.
--
-- PARAMETERS: hashes=512, minMatches=3, threshold=0.78
--
-- Given 32 GB, can fit 96000 reads per block.
-- For 31 blocks, set stride to 7 blocks.
-- Logging partitioning to 'correction/1-overlapper/partitioning.log'.
-- Configured 30 mhap precompute jobs.
-- Configured 76 mhap overlap jobs.
-- Finished stage 'cor-mhapConfigure', reset canuIteration.
--
-- Running jobs.  First attempt out of 2.
----------------------------------------
-- Starting 'cormhap' concurrent execution on Mon Jun  5 15:09:28 2017 with 510783.325 GB free disk space (30 processes; 5 concurrently)

    cd correction/1-overlapper
    ./precompute.sh 1 > ./precompute.000001.out 2>&1
    ./precompute.sh 2 > ./precompute.000002.out 2>&1
    ./precompute.sh 3 > ./precompute.000003.out 2>&1
    ./precompute.sh 4 > ./precompute.000004.out 2>&1
    ./precompute.sh 5 > ./precompute.000005.out 2>&1
    ./precompute.sh 6 > ./precompute.000006.out 2>&1
    ./precompute.sh 7 > ./precompute.000007.out 2>&1
    ./precompute.sh 8 > ./precompute.000008.out 2>&1
    ./precompute.sh 9 > ./precompute.000009.out 2>&1
    ./precompute.sh 10 > ./precompute.000010.out 2>&1
    ./precompute.sh 11 > ./precompute.000011.out 2>&1
    ./precompute.sh 12 > ./precompute.000012.out 2>&1
    ./precompute.sh 13 > ./precompute.000013.out 2>&1
    ./precompute.sh 14 > ./precompute.000014.out 2>&1
    ./precompute.sh 15 > ./precompute.000015.out 2>&1
    ./precompute.sh 16 > ./precompute.000016.out 2>&1
    ./precompute.sh 17 > ./precompute.000017.out 2>&1
    ./precompute.sh 18 > ./precompute.000018.out 2>&1
    ./precompute.sh 19 > ./precompute.000019.out 2>&1
    ./precompute.sh 20 > ./precompute.000020.out 2>&1
    ./precompute.sh 21 > ./precompute.000021.out 2>&1
    ./precompute.sh 22 > ./precompute.000022.out 2>&1
    ./precompute.sh 23 > ./precompute.000023.out 2>&1
    ./precompute.sh 24 > ./precompute.000024.out 2>&1
    ./precompute.sh 25 > ./precompute.000025.out 2>&1
    ./precompute.sh 26 > ./precompute.000026.out 2>&1
    ./precompute.sh 27 > ./precompute.000027.out 2>&1
    ./precompute.sh 28 > ./precompute.000028.out 2>&1
    ./precompute.sh 29 > ./precompute.000029.out 2>&1
    ./precompute.sh 30 > ./precompute.000030.out 2>&1

-- Finished on Mon Jun  5 18:30:29 2017 (12061 seconds) with 511330.906 GB free disk space
----------------------------------------
-- All 30 mhap precompute jobs finished successfully.
-- Finished stage 'cor-mhapPrecomputeCheck', reset canuIteration.
--
-- Running jobs.  First attempt out of 2.
----------------------------------------
-- Starting 'cormhap' concurrent execution on Mon Jun  5 18:30:30 2017 with 511330.906 GB free disk space (76 processes; 5 concurrently)

    cd correction/1-overlapper
    ./mhap.sh 1 > ./mhap.000001.out 2>&1
    ./mhap.sh 2 > ./mhap.000002.out 2>&1
    ./mhap.sh 3 > ./mhap.000003.out 2>&1
    ./mhap.sh 4 > ./mhap.000004.out 2>&1
    ./mhap.sh 5 > ./mhap.000005.out 2>&1
    ./mhap.sh 6 > ./mhap.000006.out 2>&1
    ./mhap.sh 7 > ./mhap.000007.out 2>&1
    ./mhap.sh 8 > ./mhap.000008.out 2>&1
    ./mhap.sh 9 > ./mhap.000009.out 2>&1
    ./mhap.sh 10 > ./mhap.000010.out 2>&1
    ./mhap.sh 11 > ./mhap.000011.out 2>&1
    ./mhap.sh 12 > ./mhap.000012.out 2>&1
    ./mhap.sh 13 > ./mhap.000013.out 2>&1
    ./mhap.sh 14 > ./mhap.000014.out 2>&1
    ./mhap.sh 15 > ./mhap.000015.out 2>&1
    ./mhap.sh 16 > ./mhap.000016.out 2>&1
    ./mhap.sh 17 > ./mhap.000017.out 2>&1
    ./mhap.sh 18 > ./mhap.000018.out 2>&1
    ./mhap.sh 19 > ./mhap.000019.out 2>&1
    ./mhap.sh 20 > ./mhap.000020.out 2>&1
    ./mhap.sh 21 > ./mhap.000021.out 2>&1
    ./mhap.sh 22 > ./mhap.000022.out 2>&1
    ./mhap.sh 23 > ./mhap.000023.out 2>&1
    ./mhap.sh 24 > ./mhap.000024.out 2>&1
    ./mhap.sh 25 > ./mhap.000025.out 2>&1
    ./mhap.sh 26 > ./mhap.000026.out 2>&1
    ./mhap.sh 27 > ./mhap.000027.out 2>&1
    ./mhap.sh 28 > ./mhap.000028.out 2>&1
    ./mhap.sh 29 > ./mhap.000029.out 2>&1
    ./mhap.sh 30 > ./mhap.000030.out 2>&1
    ./mhap.sh 31 > ./mhap.000031.out 2>&1
    ./mhap.sh 32 > ./mhap.000032.out 2>&1
    ./mhap.sh 33 > ./mhap.000033.out 2>&1
    ./mhap.sh 34 > ./mhap.000034.out 2>&1
    ./mhap.sh 35 > ./mhap.000035.out 2>&1
    ./mhap.sh 36 > ./mhap.000036.out 2>&1
    ./mhap.sh 37 > ./mhap.000037.out 2>&1
    ./mhap.sh 38 > ./mhap.000038.out 2>&1
    ./mhap.sh 39 > ./mhap.000039.out 2>&1
    ./mhap.sh 40 > ./mhap.000040.out 2>&1
    ./mhap.sh 41 > ./mhap.000041.out 2>&1
    ./mhap.sh 42 > ./mhap.000042.out 2>&1
    ./mhap.sh 43 > ./mhap.000043.out 2>&1
    ./mhap.sh 44 > ./mhap.000044.out 2>&1
    ./mhap.sh 45 > ./mhap.000045.out 2>&1
    ./mhap.sh 46 > ./mhap.000046.out 2>&1
    ./mhap.sh 47 > ./mhap.000047.out 2>&1
    ./mhap.sh 48 > ./mhap.000048.out 2>&1
    ./mhap.sh 49 > ./mhap.000049.out 2>&1
    ./mhap.sh 50 > ./mhap.000050.out 2>&1
    ./mhap.sh 51 > ./mhap.000051.out 2>&1
    ./mhap.sh 52 > ./mhap.000052.out 2>&1
    ./mhap.sh 53 > ./mhap.000053.out 2>&1
    ./mhap.sh 54 > ./mhap.000054.out 2>&1
    ./mhap.sh 55 > ./mhap.000055.out 2>&1
    ./mhap.sh 56 > ./mhap.000056.out 2>&1
    ./mhap.sh 57 > ./mhap.000057.out 2>&1
    ./mhap.sh 58 > ./mhap.000058.out 2>&1
    ./mhap.sh 59 > ./mhap.000059.out 2>&1
    ./mhap.sh 60 > ./mhap.000060.out 2>&1
    ./mhap.sh 61 > ./mhap.000061.out 2>&1
    ./mhap.sh 62 > ./mhap.000062.out 2>&1
    ./mhap.sh 63 > ./mhap.000063.out 2>&1
    ./mhap.sh 64 > ./mhap.000064.out 2>&1
    ./mhap.sh 65 > ./mhap.000065.out 2>&1
    ./mhap.sh 66 > ./mhap.000066.out 2>&1
    ./mhap.sh 67 > ./mhap.000067.out 2>&1
    ./mhap.sh 68 > ./mhap.000068.out 2>&1
    ./mhap.sh 69 > ./mhap.000069.out 2>&1
    ./mhap.sh 70 > ./mhap.000070.out 2>&1
    ./mhap.sh 71 > ./mhap.000071.out 2>&1
    ./mhap.sh 72 > ./mhap.000072.out 2>&1
    ./mhap.sh 73 > ./mhap.000073.out 2>&1
    ./mhap.sh 74 > ./mhap.000074.out 2>&1
    ./mhap.sh 75 > ./mhap.000075.out 2>&1
    ./mhap.sh 76 > ./mhap.000076.out 2>&1

-- Finished on Mon Jun  5 22:43:32 2017 (15182 seconds) with 510103.641 GB free disk space
----------------------------------------
-- Found 76 mhap overlap output files.
-- Finished stage 'cor-mhapCheck', reset canuIteration.
----------------------------------------
-- Starting command on Mon Jun  5 22:43:33 2017 with 510103.641 GB free disk space

    cd correction
    ./red_alder.ovlStore.BUILDING/scripts/0-config.sh \
    > ./red_alder.ovlStore.BUILDING/config.err 2>&1

-- Finished on Mon Jun  5 22:43:35 2017 (2 seconds) with 510096.968 GB free disk space
----------------------------------------
-- Finished stage 'cor-overlapStoreConfigure', reset canuIteration.
--
-- Running jobs.  First attempt out of 2.
----------------------------------------
-- Starting 'ovB' concurrent execution on Mon Jun  5 22:43:35 2017 with 510096.968 GB free disk space (76 processes; 80 concurrently)

    cd correction/red_alder.ovlStore.BUILDING
    ./scripts/1-bucketize.sh 1 > ./logs/1-bucketize.000001.out 2>&1
    ./scripts/1-bucketize.sh 2 > ./logs/1-bucketize.000002.out 2>&1
    ./scripts/1-bucketize.sh 3 > ./logs/1-bucketize.000003.out 2>&1
    ./scripts/1-bucketize.sh 4 > ./logs/1-bucketize.000004.out 2>&1
    ./scripts/1-bucketize.sh 5 > ./logs/1-bucketize.000005.out 2>&1
    ./scripts/1-bucketize.sh 6 > ./logs/1-bucketize.000006.out 2>&1
    ./scripts/1-bucketize.sh 7 > ./logs/1-bucketize.000007.out 2>&1
    ./scripts/1-bucketize.sh 8 > ./logs/1-bucketize.000008.out 2>&1
    ./scripts/1-bucketize.sh 9 > ./logs/1-bucketize.000009.out 2>&1
    ./scripts/1-bucketize.sh 10 > ./logs/1-bucketize.000010.out 2>&1
    ./scripts/1-bucketize.sh 11 > ./logs/1-bucketize.000011.out 2>&1
    ./scripts/1-bucketize.sh 12 > ./logs/1-bucketize.000012.out 2>&1
    ./scripts/1-bucketize.sh 13 > ./logs/1-bucketize.000013.out 2>&1
    ./scripts/1-bucketize.sh 14 > ./logs/1-bucketize.000014.out 2>&1
    ./scripts/1-bucketize.sh 15 > ./logs/1-bucketize.000015.out 2>&1
    ./scripts/1-bucketize.sh 16 > ./logs/1-bucketize.000016.out 2>&1
    ./scripts/1-bucketize.sh 17 > ./logs/1-bucketize.000017.out 2>&1
    ./scripts/1-bucketize.sh 18 > ./logs/1-bucketize.000018.out 2>&1
    ./scripts/1-bucketize.sh 19 > ./logs/1-bucketize.000019.out 2>&1
    ./scripts/1-bucketize.sh 20 > ./logs/1-bucketize.000020.out 2>&1
    ./scripts/1-bucketize.sh 21 > ./logs/1-bucketize.000021.out 2>&1
    ./scripts/1-bucketize.sh 22 > ./logs/1-bucketize.000022.out 2>&1
    ./scripts/1-bucketize.sh 23 > ./logs/1-bucketize.000023.out 2>&1
    ./scripts/1-bucketize.sh 24 > ./logs/1-bucketize.000024.out 2>&1
    ./scripts/1-bucketize.sh 25 > ./logs/1-bucketize.000025.out 2>&1
    ./scripts/1-bucketize.sh 26 > ./logs/1-bucketize.000026.out 2>&1
    ./scripts/1-bucketize.sh 27 > ./logs/1-bucketize.000027.out 2>&1
    ./scripts/1-bucketize.sh 28 > ./logs/1-bucketize.000028.out 2>&1
    ./scripts/1-bucketize.sh 29 > ./logs/1-bucketize.000029.out 2>&1
    ./scripts/1-bucketize.sh 30 > ./logs/1-bucketize.000030.out 2>&1
    ./scripts/1-bucketize.sh 31 > ./logs/1-bucketize.000031.out 2>&1
    ./scripts/1-bucketize.sh 32 > ./logs/1-bucketize.000032.out 2>&1
    ./scripts/1-bucketize.sh 33 > ./logs/1-bucketize.000033.out 2>&1
    ./scripts/1-bucketize.sh 34 > ./logs/1-bucketize.000034.out 2>&1
    ./scripts/1-bucketize.sh 35 > ./logs/1-bucketize.000035.out 2>&1
    ./scripts/1-bucketize.sh 36 > ./logs/1-bucketize.000036.out 2>&1
    ./scripts/1-bucketize.sh 37 > ./logs/1-bucketize.000037.out 2>&1
    ./scripts/1-bucketize.sh 38 > ./logs/1-bucketize.000038.out 2>&1
    ./scripts/1-bucketize.sh 39 > ./logs/1-bucketize.000039.out 2>&1
    ./scripts/1-bucketize.sh 40 > ./logs/1-bucketize.000040.out 2>&1
    ./scripts/1-bucketize.sh 41 > ./logs/1-bucketize.000041.out 2>&1
    ./scripts/1-bucketize.sh 42 > ./logs/1-bucketize.000042.out 2>&1
    ./scripts/1-bucketize.sh 43 > ./logs/1-bucketize.000043.out 2>&1
    ./scripts/1-bucketize.sh 44 > ./logs/1-bucketize.000044.out 2>&1
    ./scripts/1-bucketize.sh 45 > ./logs/1-bucketize.000045.out 2>&1
    ./scripts/1-bucketize.sh 46 > ./logs/1-bucketize.000046.out 2>&1
    ./scripts/1-bucketize.sh 47 > ./logs/1-bucketize.000047.out 2>&1
    ./scripts/1-bucketize.sh 48 > ./logs/1-bucketize.000048.out 2>&1
    ./scripts/1-bucketize.sh 49 > ./logs/1-bucketize.000049.out 2>&1
    ./scripts/1-bucketize.sh 50 > ./logs/1-bucketize.000050.out 2>&1
    ./scripts/1-bucketize.sh 51 > ./logs/1-bucketize.000051.out 2>&1
    ./scripts/1-bucketize.sh 52 > ./logs/1-bucketize.000052.out 2>&1
    ./scripts/1-bucketize.sh 53 > ./logs/1-bucketize.000053.out 2>&1
    ./scripts/1-bucketize.sh 54 > ./logs/1-bucketize.000054.out 2>&1
    ./scripts/1-bucketize.sh 55 > ./logs/1-bucketize.000055.out 2>&1
    ./scripts/1-bucketize.sh 56 > ./logs/1-bucketize.000056.out 2>&1
    ./scripts/1-bucketize.sh 57 > ./logs/1-bucketize.000057.out 2>&1
    ./scripts/1-bucketize.sh 58 > ./logs/1-bucketize.000058.out 2>&1
    ./scripts/1-bucketize.sh 59 > ./logs/1-bucketize.000059.out 2>&1
    ./scripts/1-bucketize.sh 60 > ./logs/1-bucketize.000060.out 2>&1
    ./scripts/1-bucketize.sh 61 > ./logs/1-bucketize.000061.out 2>&1
    ./scripts/1-bucketize.sh 62 > ./logs/1-bucketize.000062.out 2>&1
    ./scripts/1-bucketize.sh 63 > ./logs/1-bucketize.000063.out 2>&1
    ./scripts/1-bucketize.sh 64 > ./logs/1-bucketize.000064.out 2>&1
    ./scripts/1-bucketize.sh 65 > ./logs/1-bucketize.000065.out 2>&1
    ./scripts/1-bucketize.sh 66 > ./logs/1-bucketize.000066.out 2>&1
    ./scripts/1-bucketize.sh 67 > ./logs/1-bucketize.000067.out 2>&1
    ./scripts/1-bucketize.sh 68 > ./logs/1-bucketize.000068.out 2>&1
    ./scripts/1-bucketize.sh 69 > ./logs/1-bucketize.000069.out 2>&1
    ./scripts/1-bucketize.sh 70 > ./logs/1-bucketize.000070.out 2>&1
    ./scripts/1-bucketize.sh 71 > ./logs/1-bucketize.000071.out 2>&1
    ./scripts/1-bucketize.sh 72 > ./logs/1-bucketize.000072.out 2>&1
    ./scripts/1-bucketize.sh 73 > ./logs/1-bucketize.000073.out 2>&1
    ./scripts/1-bucketize.sh 74 > ./logs/1-bucketize.000074.out 2>&1
    ./scripts/1-bucketize.sh 75 > ./logs/1-bucketize.000075.out 2>&1
    ./scripts/1-bucketize.sh 76 > ./logs/1-bucketize.000076.out 2>&1

-- Finished on Mon Jun  5 22:44:30 2017 (55 seconds) with 510004.186 GB free disk space
----------------------------------------
-- Overlap store bucketizer finished.
-- Finished stage 'cor-overlapStoreBucketizerCheck', reset canuIteration.
--
-- Running jobs.  First attempt out of 2.
----------------------------------------
-- Starting 'ovS' concurrent execution on Mon Jun  5 22:44:30 2017 with 510004.186 GB free disk space (38 processes; 80 concurrently)

    cd correction/red_alder.ovlStore.BUILDING
    ./scripts/2-sort.sh 1 > ./logs/2-sort.000001.out 2>&1
    ./scripts/2-sort.sh 2 > ./logs/2-sort.000002.out 2>&1
    ./scripts/2-sort.sh 3 > ./logs/2-sort.000003.out 2>&1
    ./scripts/2-sort.sh 4 > ./logs/2-sort.000004.out 2>&1
    ./scripts/2-sort.sh 5 > ./logs/2-sort.000005.out 2>&1
    ./scripts/2-sort.sh 6 > ./logs/2-sort.000006.out 2>&1
    ./scripts/2-sort.sh 7 > ./logs/2-sort.000007.out 2>&1
    ./scripts/2-sort.sh 8 > ./logs/2-sort.000008.out 2>&1
    ./scripts/2-sort.sh 9 > ./logs/2-sort.000009.out 2>&1
    ./scripts/2-sort.sh 10 > ./logs/2-sort.000010.out 2>&1
    ./scripts/2-sort.sh 11 > ./logs/2-sort.000011.out 2>&1
    ./scripts/2-sort.sh 12 > ./logs/2-sort.000012.out 2>&1
    ./scripts/2-sort.sh 13 > ./logs/2-sort.000013.out 2>&1
    ./scripts/2-sort.sh 14 > ./logs/2-sort.000014.out 2>&1
    ./scripts/2-sort.sh 15 > ./logs/2-sort.000015.out 2>&1
    ./scripts/2-sort.sh 16 > ./logs/2-sort.000016.out 2>&1
    ./scripts/2-sort.sh 17 > ./logs/2-sort.000017.out 2>&1
    ./scripts/2-sort.sh 18 > ./logs/2-sort.000018.out 2>&1
    ./scripts/2-sort.sh 19 > ./logs/2-sort.000019.out 2>&1
    ./scripts/2-sort.sh 20 > ./logs/2-sort.000020.out 2>&1
    ./scripts/2-sort.sh 21 > ./logs/2-sort.000021.out 2>&1
    ./scripts/2-sort.sh 22 > ./logs/2-sort.000022.out 2>&1
    ./scripts/2-sort.sh 23 > ./logs/2-sort.000023.out 2>&1
    ./scripts/2-sort.sh 24 > ./logs/2-sort.000024.out 2>&1
    ./scripts/2-sort.sh 25 > ./logs/2-sort.000025.out 2>&1
    ./scripts/2-sort.sh 26 > ./logs/2-sort.000026.out 2>&1
    ./scripts/2-sort.sh 27 > ./logs/2-sort.000027.out 2>&1
    ./scripts/2-sort.sh 28 > ./logs/2-sort.000028.out 2>&1
    ./scripts/2-sort.sh 29 > ./logs/2-sort.000029.out 2>&1
    ./scripts/2-sort.sh 30 > ./logs/2-sort.000030.out 2>&1
    ./scripts/2-sort.sh 31 > ./logs/2-sort.000031.out 2>&1
    ./scripts/2-sort.sh 32 > ./logs/2-sort.000032.out 2>&1
    ./scripts/2-sort.sh 33 > ./logs/2-sort.000033.out 2>&1
    ./scripts/2-sort.sh 34 > ./logs/2-sort.000034.out 2>&1
    ./scripts/2-sort.sh 35 > ./logs/2-sort.000035.out 2>&1
    ./scripts/2-sort.sh 36 > ./logs/2-sort.000036.out 2>&1
    ./scripts/2-sort.sh 37 > ./logs/2-sort.000037.out 2>&1
    ./scripts/2-sort.sh 38 > ./logs/2-sort.000038.out 2>&1

Command two output:

contigFilter 2 1000 0.75 0.75 2 num 5
-- Canu snapshot v1.5 +54 changes (r8254 f356c2c3f2eb37b53c4e7bf11e927e3fdff4d747)
--
-- CITATIONS
--
-- Koren S, Walenz BP, Berlin K, Miller JR, Phillippy AM.
-- Canu: scalable and accurate long-read assembly via adaptive k-mer weighting and repeat separation.
-- Genome Res. 2017 May;27(5):722-736.
-- http://doi.org/10.1101/gr.215087.116
-- 
-- Read and contig alignments during correction, consensus and GFA building use:
--   Šošić M, Šikić M.
--   Edlib: a C/C++ library for fast, exact sequence alignment using edit distance.
--   Bioinformatics. 2017 May 1;33(9):1394-1395.
--   http://doi.org/10.1093/bioinformatics/btw753
-- 
-- Overlaps are generated using:
--   Berlin K, et al.
--   Assembling large genomes with single-molecule sequencing and locality-sensitive hashing.
--   Nat Biotechnol. 2015 Jun;33(6):623-30.
--   http://doi.org/10.1038/nbt.3238
-- 
--   Myers EW, et al.
--   A Whole-Genome Assembly of Drosophila.
--   Science. 2000 Mar 24;287(5461):2196-204.
--   http://doi.org/10.1126/science.287.5461.2196
-- 
--   Li H.
--   Minimap and miniasm: fast mapping and de novo assembly for noisy long sequences.
--   Bioinformatics. 2016 Jul 15;32(14):2103-10.
--   http://doi.org/10.1093/bioinformatics/btw152
-- 
-- Corrected read consensus sequences are generated using an algorithm derived from FALCON-sense:
--   Chin CS, et al.
--   Phased diploid genome assembly with single-molecule real-time sequencing.
--   Nat Methods. 2016 Dec;13(12):1050-1054.
--   http://doi.org/10.1038/nmeth.4035
-- 
-- Contig consensus sequences are generated using an algorithm derived from pbdagcon:
--   Chin CS, et al.
--   Nonhybrid, finished microbial genome assemblies from long-read SMRT sequencing data.
--   Nat Methods. 2013 Jun;10(6):563-9
--   http://doi.org/10.1038/nmeth.2474
-- 
-- CONFIGURE CANU
--
-- Detected Java(TM) Runtime Environment '1.8.0_31' (from '/share/apps/jdk/1.8.0_31/bin/java').
-- Detected gnuplot version '4.6 patchlevel 5' (from 'gnuplot') and image format 'png'.
-- Detected 80 CPUs and 2020 gigabytes of memory.
-- Detected Slurm with 'sinfo' binary in /usr/bin/sinfo.
-- Grid engine disabled per useGrid=false option.
--
--                            (tag)Concurrency
--                     (tag)Threads          |
--            (tag)Memory         |          |
--        (tag)         |         |          |  algorithm
--        -------  ------  --------   --------  -----------------------------
-- Local: meryl     64 GB   16 CPUs x   5 jobs  (k-mer counting)
-- Local: cormhap   32 GB   16 CPUs x   5 jobs  (overlap detection with mhap)
-- Local: obtovl    12 GB    8 CPUs x  10 jobs  (overlap detection)
-- Local: utgovl    12 GB    8 CPUs x  10 jobs  (overlap detection)
-- Local: cor       20 GB    4 CPUs x  20 jobs  (read correction)
-- Local: ovb        4 GB    1 CPU  x  80 jobs  (overlap store bucketizer)
-- Local: ovs       16 GB    1 CPU  x  80 jobs  (overlap store sorting)
-- Local: red        8 GB    8 CPUs x  10 jobs  (read error detection)
-- Local: oea        2 GB    1 CPU  x  80 jobs  (overlap error adjustment)
-- Local: bat      256 GB   16 CPUs x   5 jobs  (contig construction)
-- Local: cns       48 GB    8 CPUs x  10 jobs  (consensus)
-- Local: gfa       16 GB   16 CPUs x   5 jobs  (GFA alignment and processing)
--
-- Generating assembly 'RA' in '/projects/'
--
-- Parameters:
--
--  genomeSize        500000000
--
--  Overlap Generation Limits:
--    corOvlErrorRate 0.2400 ( 24.00%)
--    obtOvlErrorRate 0.0450 (  4.50%)
--    utgOvlErrorRate 0.0450 (  4.50%)
--
--  Overlap Processing Limits:
--    corErrorRate    0.3000 ( 30.00%)
--    obtErrorRate    0.0450 (  4.50%)
--    utgErrorRate    0.0450 (  4.50%)
--    cnsErrorRate    0.0750 (  7.50%)
--
--
-- BEGIN CORRECTION
--
----------------------------------------
-- Starting command on Mon Jun 12 03:17:42 2017 with 511937.742 GB free disk space

    cd correction
    /people/whit040/canu/Linux-amd64/bin/gatekeeperCreate \
      -minlength 1000 \
      -o ./red_alder.gkpStore.BUILDING \
      ./red_alder.gkpStore.gkp \
    > ./red_alder.gkpStore.BUILDING.err 2>&1

-- Finished on Mon Jun 12 03:23:34 2017 (352 seconds) with 511896.441 GB free disk space
----------------------------------------
--
-- In gatekeeper store 'correction/red_alder.gkpStore':
--   Found 2811531 reads.
--   Found 21382870793 bases (42.76 times coverage).
--
--   Read length histogram (one '*' equals 4077.81 reads):
--        0    999      0 
--     1000   1999 285447 **********************************************************************
--     2000   2999 236550 **********************************************************
--     3000   3999 226336 *******************************************************
--     4000   4999 220844 ******************************************************
--     5000   5999 222772 ******************************************************
--     6000   6999 253917 **************************************************************
--     7000   7999 258272 ***************************************************************
--     8000   8999 222325 ******************************************************
--     9000   9999 177346 *******************************************
--    10000  10999 141145 **********************************
--    11000  11999 112454 ***************************
--    12000  12999  90117 **********************
--    13000  13999  72847 *****************
--    14000  14999  58551 **************
--    15000  15999  47776 ***********
--    16000  16999  38469 *********
--    17000  17999  30536 *******
--    18000  18999  24768 ******
--    19000  19999  19144 ****
--    20000  20999  15520 ***
--    21000  21999  12093 **
--    22000  22999   9417 **
--    23000  23999   7519 *
--    24000  24999   5851 *
--    25000  25999   4609 *
--    26000  26999   3560 
--    27000  27999   2800 
--    28000  28999   2142 
--    29000  29999   1722 
--    30000  30999   1279 
--    31000  31999   1050 
--    32000  32999    799 
--    33000  33999    632 
--    34000  34999    580 
--    35000  35999    431 
--    36000  36999    366 
--    37000  37999    282 
--    38000  38999    230 
--    39000  39999    176 
--    40000  40999    151 
--    41000  41999    144 
--    42000  42999     81 
--    43000  43999     85 
--    44000  44999     64 
--    45000  45999     69 
--    46000  46999     40 
--    47000  47999     46 
--    48000  48999     41 
--    49000  49999     24 
--    50000  50999     20 
--    51000  51999     15 
--    52000  52999     11 
--    53000  53999     11 
--    54000  54999     10 
--    55000  55999      8 
--    56000  56999      4 
--    57000  57999      4 
--    58000  58999      8 
--    59000  59999      4 
--    60000  60999      4 
--    61000  61999      0 
--    62000  62999      1 
--    63000  63999      2 
--    64000  64999      2 
--    65000  65999      3 
--    66000  66999      2 
--    67000  67999      1 
--    68000  68999      0 
--    69000  69999      1 
--    70000  70999      1 
-- Finished stage 'cor-gatekeeper', reset canuIteration.
-- Finished stage 'merylConfigure', reset canuIteration.
--
-- Running jobs.  First attempt out of 2.
----------------------------------------
-- Starting 'meryl' concurrent execution on Mon Jun 12 03:24:14 2017 with 511880.694 GB free disk space (1 processes; 5 concurrently)

    cd correction/0-mercounts
    ./meryl.sh 1 > ./meryl.000001.out 2>&1

-- Finished on Mon Jun 12 04:07:37 2017 (2603 seconds) with 511492.925 GB free disk space
----------------------------------------
-- Meryl finished successfully.
-- Finished stage 'merylCheck', reset canuIteration.
--
--  16-mers                                                                                           Fraction
--    Occurrences   NumMers                                                                         Unique Total
--       1-     1 260923796 *******************************************                            0.1311 0.0122
--       2-     2 270090218 ********************************************                           0.2668 0.0375
--       3-     4 421559464 ********************************************************************** 0.3840 0.0703
--       5-     7 363844185 ************************************************************           0.5535 0.1406
--       8-    11 245285830 ****************************************                               0.7008 0.2349
--      12-    16 154436899 *************************                                              0.8047 0.3346
--      17-    22  95371117 ***************                                                        0.8726 0.4279
--      23-    29  59128886 *********                                                              0.9156 0.5090
--      30-    37  37246447 ******                                                                 0.9428 0.5767
--      38-    46  24008183 ***                                                                    0.9602 0.6321
--      47-    56  15893333 **                                                                     0.9716 0.6773
--      57-    67  10802170 *                                                                      0.9792 0.7142
--      68-    79   7562216 *                                                                      0.9844 0.7446
--      80-    92   5441502                                                                        0.9881 0.7699
--      93-   106   4013968                                                                        0.9907 0.7913
--     107-   121   3000712                                                                        0.9927 0.8097
--     122-   137   2274562                                                                        0.9942 0.8254
--     138-   154   1746320                                                                        0.9953 0.8390
--     155-   172   1349130                                                                        0.9961 0.8507
--     173-   191   1050865                                                                        0.9968 0.8609
--     192-   211    826698                                                                        0.9973 0.8698
--     212-   232    659717                                                                        0.9977 0.8775
--     233-   254    532960                                                                        0.9981 0.8843
--     255-   277    439248                                                                        0.9983 0.8903
--     278-   301    366645                                                                        0.9985 0.8958
--     302-   326    309908                                                                        0.9987 0.9007
--     327-   352    263225                                                                        0.9989 0.9052
--     353-   379    226015                                                                        0.9990 0.9094
--     380-   407    193906                                                                        0.9991 0.9132
--     408-   436    165653                                                                        0.9992 0.9168
--     437-   466    142045                                                                        0.9993 0.9201
--     467-   497    122302                                                                        0.9994 0.9230
--     498-   529    105394                                                                        0.9994 0.9258
--     530-   562     90636                                                                        0.9995 0.9283
--     563-   596     78960                                                                        0.9995 0.9306
--     597-   631     69050                                                                        0.9996 0.9328
--     632-   667     60613                                                                        0.9996 0.9347
--     668-   704     53835                                                                        0.9996 0.9366
--     705-   742     47774                                                                        0.9997 0.9383
--     743-   781     42084                                                                        0.9997 0.9399
--     782-   821     37483                                                                        0.9997 0.9414
--
--    16588315 (max occurrences)
-- 21079774032 (total mers, non-unique)
--  1729483449 (distinct mers, non-unique)
--   260923796 (unique mers)
-- For mhap overlapping, set repeat k-mer threshold to 213406.
--
-- Found 21340697828 16-mers; 1990407245 distinct and 260923796 unique.  Largest count 16588315.
-- Finished stage 'cor-meryl', reset canuIteration.
--
-- OVERLAPPER (mhap) (correction)
--
-- Set corMhapSensitivity=normal based on read coverage of 42.
--
-- PARAMETERS: hashes=512, minMatches=3, threshold=0.78
--
-- Given 32 GB, can fit 96000 reads per block.
-- For 31 blocks, set stride to 7 blocks.
-- Logging partitioning to 'correction/1-overlapper/partitioning.log'.
-- Configured 30 mhap precompute jobs.
-- Configured 76 mhap overlap jobs.
-- Finished stage 'cor-mhapConfigure', reset canuIteration.
--
-- Running jobs.  First attempt out of 2.
----------------------------------------
-- Starting 'cormhap' concurrent execution on Mon Jun 12 04:09:40 2017 with 511534.106 GB free disk space (30 processes; 5 concurrently)

    cd correction/1-overlapper
    ./precompute.sh 1 > ./precompute.000001.out 2>&1
    ./precompute.sh 2 > ./precompute.000002.out 2>&1
    ./precompute.sh 3 > ./precompute.000003.out 2>&1
    ./precompute.sh 4 > ./precompute.000004.out 2>&1
    ./precompute.sh 5 > ./precompute.000005.out 2>&1
    ./precompute.sh 6 > ./precompute.000006.out 2>&1
    ./precompute.sh 7 > ./precompute.000007.out 2>&1
    ./precompute.sh 8 > ./precompute.000008.out 2>&1
    ./precompute.sh 9 > ./precompute.000009.out 2>&1
    ./precompute.sh 10 > ./precompute.000010.out 2>&1
    ./precompute.sh 11 > ./precompute.000011.out 2>&1
    ./precompute.sh 12 > ./precompute.000012.out 2>&1
    ./precompute.sh 13 > ./precompute.000013.out 2>&1
    ./precompute.sh 14 > ./precompute.000014.out 2>&1
    ./precompute.sh 15 > ./precompute.000015.out 2>&1
    ./precompute.sh 16 > ./precompute.000016.out 2>&1
    ./precompute.sh 17 > ./precompute.000017.out 2>&1
    ./precompute.sh 18 > ./precompute.000018.out 2>&1
    ./precompute.sh 19 > ./precompute.000019.out 2>&1
    ./precompute.sh 20 > ./precompute.000020.out 2>&1
    ./precompute.sh 21 > ./precompute.000021.out 2>&1
    ./precompute.sh 22 > ./precompute.000022.out 2>&1
    ./precompute.sh 23 > ./precompute.000023.out 2>&1
    ./precompute.sh 24 > ./precompute.000024.out 2>&1
    ./precompute.sh 25 > ./precompute.000025.out 2>&1
    ./precompute.sh 26 > ./precompute.000026.out 2>&1
    ./precompute.sh 27 > ./precompute.000027.out 2>&1
    ./precompute.sh 28 > ./precompute.000028.out 2>&1
    ./precompute.sh 29 > ./precompute.000029.out 2>&1
    ./precompute.sh 30 > ./precompute.000030.out 2>&1

-- Finished on Mon Jun 12 07:29:36 2017 (11996 seconds) with 509824.659 GB free disk space
----------------------------------------
-- All 30 mhap precompute jobs finished successfully.
-- Finished stage 'cor-mhapPrecomputeCheck', reset canuIteration.
--
-- Running jobs.  First attempt out of 2.
----------------------------------------
-- Starting 'cormhap' concurrent execution on Mon Jun 12 07:29:36 2017 with 509824.659 GB free disk space (76 processes; 5 concurrently)

    cd correction/1-overlapper
    ./mhap.sh 1 > ./mhap.000001.out 2>&1
    ./mhap.sh 2 > ./mhap.000002.out 2>&1
    ./mhap.sh 3 > ./mhap.000003.out 2>&1
    ./mhap.sh 4 > ./mhap.000004.out 2>&1
    ./mhap.sh 5 > ./mhap.000005.out 2>&1
    ./mhap.sh 6 > ./mhap.000006.out 2>&1
    ./mhap.sh 7 > ./mhap.000007.out 2>&1
    ./mhap.sh 8 > ./mhap.000008.out 2>&1
    ./mhap.sh 9 > ./mhap.000009.out 2>&1
    ./mhap.sh 10 > ./mhap.000010.out 2>&1
    ./mhap.sh 11 > ./mhap.000011.out 2>&1
    ./mhap.sh 12 > ./mhap.000012.out 2>&1
    ./mhap.sh 13 > ./mhap.000013.out 2>&1
    ./mhap.sh 14 > ./mhap.000014.out 2>&1
    ./mhap.sh 15 > ./mhap.000015.out 2>&1
    ./mhap.sh 16 > ./mhap.000016.out 2>&1
    ./mhap.sh 17 > ./mhap.000017.out 2>&1
    ./mhap.sh 18 > ./mhap.000018.out 2>&1
    ./mhap.sh 19 > ./mhap.000019.out 2>&1
    ./mhap.sh 20 > ./mhap.000020.out 2>&1
    ./mhap.sh 21 > ./mhap.000021.out 2>&1
    ./mhap.sh 22 > ./mhap.000022.out 2>&1
    ./mhap.sh 23 > ./mhap.000023.out 2>&1
    ./mhap.sh 24 > ./mhap.000024.out 2>&1
    ./mhap.sh 25 > ./mhap.000025.out 2>&1
    ./mhap.sh 26 > ./mhap.000026.out 2>&1
    ./mhap.sh 27 > ./mhap.000027.out 2>&1
    ./mhap.sh 28 > ./mhap.000028.out 2>&1
    ./mhap.sh 29 > ./mhap.000029.out 2>&1
    ./mhap.sh 30 > ./mhap.000030.out 2>&1
    ./mhap.sh 31 > ./mhap.000031.out 2>&1
    ./mhap.sh 32 > ./mhap.000032.out 2>&1
    ./mhap.sh 33 > ./mhap.000033.out 2>&1
    ./mhap.sh 34 > ./mhap.000034.out 2>&1
    ./mhap.sh 35 > ./mhap.000035.out 2>&1
    ./mhap.sh 36 > ./mhap.000036.out 2>&1
    ./mhap.sh 37 > ./mhap.000037.out 2>&1
    ./mhap.sh 38 > ./mhap.000038.out 2>&1
    ./mhap.sh 39 > ./mhap.000039.out 2>&1
    ./mhap.sh 40 > ./mhap.000040.out 2>&1
    ./mhap.sh 41 > ./mhap.000041.out 2>&1
    ./mhap.sh 42 > ./mhap.000042.out 2>&1
    ./mhap.sh 43 > ./mhap.000043.out 2>&1
    ./mhap.sh 44 > ./mhap.000044.out 2>&1
    ./mhap.sh 45 > ./mhap.000045.out 2>&1
    ./mhap.sh 46 > ./mhap.000046.out 2>&1
    ./mhap.sh 47 > ./mhap.000047.out 2>&1
    ./mhap.sh 48 > ./mhap.000048.out 2>&1
    ./mhap.sh 49 > ./mhap.000049.out 2>&1
    ./mhap.sh 50 > ./mhap.000050.out 2>&1
    ./mhap.sh 51 > ./mhap.000051.out 2>&1
    ./mhap.sh 52 > ./mhap.000052.out 2>&1
    ./mhap.sh 53 > ./mhap.000053.out 2>&1
    ./mhap.sh 54 > ./mhap.000054.out 2>&1
    ./mhap.sh 55 > ./mhap.000055.out 2>&1
    ./mhap.sh 56 > ./mhap.000056.out 2>&1
    ./mhap.sh 57 > ./mhap.000057.out 2>&1
    ./mhap.sh 58 > ./mhap.000058.out 2>&1
    ./mhap.sh 59 > ./mhap.000059.out 2>&1
    ./mhap.sh 60 > ./mhap.000060.out 2>&1
    ./mhap.sh 61 > ./mhap.000061.out 2>&1
    ./mhap.sh 62 > ./mhap.000062.out 2>&1
    ./mhap.sh 63 > ./mhap.000063.out 2>&1
    ./mhap.sh 64 > ./mhap.000064.out 2>&1
    ./mhap.sh 65 > ./mhap.000065.out 2>&1
    ./mhap.sh 66 > ./mhap.000066.out 2>&1
    ./mhap.sh 67 > ./mhap.000067.out 2>&1
    ./mhap.sh 68 > ./mhap.000068.out 2>&1
    ./mhap.sh 69 > ./mhap.000069.out 2>&1
    ./mhap.sh 70 > ./mhap.000070.out 2>&1
    ./mhap.sh 71 > ./mhap.000071.out 2>&1
    ./mhap.sh 72 > ./mhap.000072.out 2>&1
    ./mhap.sh 73 > ./mhap.000073.out 2>&1
    ./mhap.sh 74 > ./mhap.000074.out 2>&1
    ./mhap.sh 75 > ./mhap.000075.out 2>&1
    ./mhap.sh 76 > ./mhap.000076.out 2>&1

-- Finished on Mon Jun 12 11:39:00 2017 (14964 seconds) with 507496.478 GB free disk space
----------------------------------------
-- Found 76 mhap overlap output files.
-- Finished stage 'cor-mhapCheck', reset canuIteration.
----------------------------------------
-- Starting command on Mon Jun 12 11:39:02 2017 with 507477.643 GB free disk space

    cd correction
    ./red_alder.ovlStore.BUILDING/scripts/0-config.sh \
    > ./red_alder.ovlStore.BUILDING/config.err 2>&1

-- Finished on Mon Jun 12 11:39:13 2017 (11 seconds) with 507498.173 GB free disk space
----------------------------------------
-- Finished stage 'cor-overlapStoreConfigure', reset canuIteration.
--
-- Running jobs.  First attempt out of 2.
----------------------------------------
-- Starting 'ovB' concurrent execution on Mon Jun 12 11:39:14 2017 with 507498.173 GB free disk space (76 processes; 80 concurrently)

    cd correction/red_alder.ovlStore.BUILDING
    ./scripts/1-bucketize.sh 1 > ./logs/1-bucketize.000001.out 2>&1
    ./scripts/1-bucketize.sh 2 > ./logs/1-bucketize.000002.out 2>&1
    ./scripts/1-bucketize.sh 3 > ./logs/1-bucketize.000003.out 2>&1
    ./scripts/1-bucketize.sh 4 > ./logs/1-bucketize.000004.out 2>&1
    ./scripts/1-bucketize.sh 5 > ./logs/1-bucketize.000005.out 2>&1
    ./scripts/1-bucketize.sh 6 > ./logs/1-bucketize.000006.out 2>&1
    ./scripts/1-bucketize.sh 7 > ./logs/1-bucketize.000007.out 2>&1
    ./scripts/1-bucketize.sh 8 > ./logs/1-bucketize.000008.out 2>&1
    ./scripts/1-bucketize.sh 9 > ./logs/1-bucketize.000009.out 2>&1
    ./scripts/1-bucketize.sh 10 > ./logs/1-bucketize.000010.out 2>&1
    ./scripts/1-bucketize.sh 11 > ./logs/1-bucketize.000011.out 2>&1
    ./scripts/1-bucketize.sh 12 > ./logs/1-bucketize.000012.out 2>&1
    ./scripts/1-bucketize.sh 13 > ./logs/1-bucketize.000013.out 2>&1
    ./scripts/1-bucketize.sh 14 > ./logs/1-bucketize.000014.out 2>&1
    ./scripts/1-bucketize.sh 15 > ./logs/1-bucketize.000015.out 2>&1
    ./scripts/1-bucketize.sh 16 > ./logs/1-bucketize.000016.out 2>&1
    ./scripts/1-bucketize.sh 17 > ./logs/1-bucketize.000017.out 2>&1
    ./scripts/1-bucketize.sh 18 > ./logs/1-bucketize.000018.out 2>&1
    ./scripts/1-bucketize.sh 19 > ./logs/1-bucketize.000019.out 2>&1
    ./scripts/1-bucketize.sh 20 > ./logs/1-bucketize.000020.out 2>&1
    ./scripts/1-bucketize.sh 21 > ./logs/1-bucketize.000021.out 2>&1
    ./scripts/1-bucketize.sh 22 > ./logs/1-bucketize.000022.out 2>&1
    ./scripts/1-bucketize.sh 23 > ./logs/1-bucketize.000023.out 2>&1
    ./scripts/1-bucketize.sh 24 > ./logs/1-bucketize.000024.out 2>&1
    ./scripts/1-bucketize.sh 25 > ./logs/1-bucketize.000025.out 2>&1
    ./scripts/1-bucketize.sh 26 > ./logs/1-bucketize.000026.out 2>&1
    ./scripts/1-bucketize.sh 27 > ./logs/1-bucketize.000027.out 2>&1
    ./scripts/1-bucketize.sh 28 > ./logs/1-bucketize.000028.out 2>&1
    ./scripts/1-bucketize.sh 29 > ./logs/1-bucketize.000029.out 2>&1
    ./scripts/1-bucketize.sh 30 > ./logs/1-bucketize.000030.out 2>&1
    ./scripts/1-bucketize.sh 31 > ./logs/1-bucketize.000031.out 2>&1
    ./scripts/1-bucketize.sh 32 > ./logs/1-bucketize.000032.out 2>&1
    ./scripts/1-bucketize.sh 33 > ./logs/1-bucketize.000033.out 2>&1
    ./scripts/1-bucketize.sh 34 > ./logs/1-bucketize.000034.out 2>&1
    ./scripts/1-bucketize.sh 35 > ./logs/1-bucketize.000035.out 2>&1
    ./scripts/1-bucketize.sh 36 > ./logs/1-bucketize.000036.out 2>&1
    ./scripts/1-bucketize.sh 37 > ./logs/1-bucketize.000037.out 2>&1
    ./scripts/1-bucketize.sh 38 > ./logs/1-bucketize.000038.out 2>&1
    ./scripts/1-bucketize.sh 39 > ./logs/1-bucketize.000039.out 2>&1
    ./scripts/1-bucketize.sh 40 > ./logs/1-bucketize.000040.out 2>&1
    ./scripts/1-bucketize.sh 41 > ./logs/1-bucketize.000041.out 2>&1
    ./scripts/1-bucketize.sh 42 > ./logs/1-bucketize.000042.out 2>&1
    ./scripts/1-bucketize.sh 43 > ./logs/1-bucketize.000043.out 2>&1
    ./scripts/1-bucketize.sh 44 > ./logs/1-bucketize.000044.out 2>&1
    ./scripts/1-bucketize.sh 45 > ./logs/1-bucketize.000045.out 2>&1
    ./scripts/1-bucketize.sh 46 > ./logs/1-bucketize.000046.out 2>&1
    ./scripts/1-bucketize.sh 47 > ./logs/1-bucketize.000047.out 2>&1
    ./scripts/1-bucketize.sh 48 > ./logs/1-bucketize.000048.out 2>&1
    ./scripts/1-bucketize.sh 49 > ./logs/1-bucketize.000049.out 2>&1
    ./scripts/1-bucketize.sh 50 > ./logs/1-bucketize.000050.out 2>&1
    ./scripts/1-bucketize.sh 51 > ./logs/1-bucketize.000051.out 2>&1
    ./scripts/1-bucketize.sh 52 > ./logs/1-bucketize.000052.out 2>&1
    ./scripts/1-bucketize.sh 53 > ./logs/1-bucketize.000053.out 2>&1
    ./scripts/1-bucketize.sh 54 > ./logs/1-bucketize.000054.out 2>&1
    ./scripts/1-bucketize.sh 55 > ./logs/1-bucketize.000055.out 2>&1
    ./scripts/1-bucketize.sh 56 > ./logs/1-bucketize.000056.out 2>&1
    ./scripts/1-bucketize.sh 57 > ./logs/1-bucketize.000057.out 2>&1
    ./scripts/1-bucketize.sh 58 > ./logs/1-bucketize.000058.out 2>&1
    ./scripts/1-bucketize.sh 59 > ./logs/1-bucketize.000059.out 2>&1
    ./scripts/1-bucketize.sh 60 > ./logs/1-bucketize.000060.out 2>&1
    ./scripts/1-bucketize.sh 61 > ./logs/1-bucketize.000061.out 2>&1
    ./scripts/1-bucketize.sh 62 > ./logs/1-bucketize.000062.out 2>&1
    ./scripts/1-bucketize.sh 63 > ./logs/1-bucketize.000063.out 2>&1
    ./scripts/1-bucketize.sh 64 > ./logs/1-bucketize.000064.out 2>&1
    ./scripts/1-bucketize.sh 65 > ./logs/1-bucketize.000065.out 2>&1
    ./scripts/1-bucketize.sh 66 > ./logs/1-bucketize.000066.out 2>&1
    ./scripts/1-bucketize.sh 67 > ./logs/1-bucketize.000067.out 2>&1
    ./scripts/1-bucketize.sh 68 > ./logs/1-bucketize.000068.out 2>&1
    ./scripts/1-bucketize.sh 69 > ./logs/1-bucketize.000069.out 2>&1
    ./scripts/1-bucketize.sh 70 > ./logs/1-bucketize.000070.out 2>&1
    ./scripts/1-bucketize.sh 71 > ./logs/1-bucketize.000071.out 2>&1
    ./scripts/1-bucketize.sh 72 > ./logs/1-bucketize.000072.out 2>&1
    ./scripts/1-bucketize.sh 73 > ./logs/1-bucketize.000073.out 2>&1
    ./scripts/1-bucketize.sh 74 > ./logs/1-bucketize.000074.out 2>&1
    ./scripts/1-bucketize.sh 75 > ./logs/1-bucketize.000075.out 2>&1
    ./scripts/1-bucketize.sh 76 > ./logs/1-bucketize.000076.out 2>&1

-- Finished on Mon Jun 12 11:40:12 2017 (58 seconds) with 507383.73 GB free disk space
----------------------------------------
-- Overlap store bucketizer finished.
-- Finished stage 'cor-overlapStoreBucketizerCheck', reset canuIteration.
--
-- Running jobs.  First attempt out of 2.
----------------------------------------
-- Starting 'ovS' concurrent execution on Mon Jun 12 11:40:12 2017 with 507383.73 GB free disk space (38 processes; 80 concurrently)

    cd correction/red_alder.ovlStore.BUILDING
    ./scripts/2-sort.sh 1 > ./logs/2-sort.000001.out 2>&1
    ./scripts/2-sort.sh 2 > ./logs/2-sort.000002.out 2>&1
    ./scripts/2-sort.sh 3 > ./logs/2-sort.000003.out 2>&1
    ./scripts/2-sort.sh 4 > ./logs/2-sort.000004.out 2>&1
    ./scripts/2-sort.sh 5 > ./logs/2-sort.000005.out 2>&1
    ./scripts/2-sort.sh 6 > ./logs/2-sort.000006.out 2>&1
    ./scripts/2-sort.sh 7 > ./logs/2-sort.000007.out 2>&1
    ./scripts/2-sort.sh 8 > ./logs/2-sort.000008.out 2>&1
    ./scripts/2-sort.sh 9 > ./logs/2-sort.000009.out 2>&1
    ./scripts/2-sort.sh 10 > ./logs/2-sort.000010.out 2>&1
    ./scripts/2-sort.sh 11 > ./logs/2-sort.000011.out 2>&1
    ./scripts/2-sort.sh 12 > ./logs/2-sort.000012.out 2>&1
    ./scripts/2-sort.sh 13 > ./logs/2-sort.000013.out 2>&1
    ./scripts/2-sort.sh 14 > ./logs/2-sort.000014.out 2>&1
    ./scripts/2-sort.sh 15 > ./logs/2-sort.000015.out 2>&1
    ./scripts/2-sort.sh 16 > ./logs/2-sort.000016.out 2>&1
    ./scripts/2-sort.sh 17 > ./logs/2-sort.000017.out 2>&1
    ./scripts/2-sort.sh 18 > ./logs/2-sort.000018.out 2>&1
    ./scripts/2-sort.sh 19 > ./logs/2-sort.000019.out 2>&1
    ./scripts/2-sort.sh 20 > ./logs/2-sort.000020.out 2>&1
    ./scripts/2-sort.sh 21 > ./logs/2-sort.000021.out 2>&1
    ./scripts/2-sort.sh 22 > ./logs/2-sort.000022.out 2>&1
    ./scripts/2-sort.sh 23 > ./logs/2-sort.000023.out 2>&1
    ./scripts/2-sort.sh 24 > ./logs/2-sort.000024.out 2>&1
    ./scripts/2-sort.sh 25 > ./logs/2-sort.000025.out 2>&1
    ./scripts/2-sort.sh 26 > ./logs/2-sort.000026.out 2>&1
    ./scripts/2-sort.sh 27 > ./logs/2-sort.000027.out 2>&1
    ./scripts/2-sort.sh 28 > ./logs/2-sort.000028.out 2>&1
    ./scripts/2-sort.sh 29 > ./logs/2-sort.000029.out 2>&1
    ./scripts/2-sort.sh 30 > ./logs/2-sort.000030.out 2>&1
    ./scripts/2-sort.sh 31 > ./logs/2-sort.000031.out 2>&1
    ./scripts/2-sort.sh 32 > ./logs/2-sort.000032.out 2>&1
    ./scripts/2-sort.sh 33 > ./logs/2-sort.000033.out 2>&1
    ./scripts/2-sort.sh 34 > ./logs/2-sort.000034.out 2>&1
    ./scripts/2-sort.sh 35 > ./logs/2-sort.000035.out 2>&1
    ./scripts/2-sort.sh 36 > ./logs/2-sort.000036.out 2>&1
    ./scripts/2-sort.sh 37 > ./logs/2-sort.000037.out 2>&1
    ./scripts/2-sort.sh 38 > ./logs/2-sort.000038.out 2>&1

The run stops here with no further output.

I have no problems with my bacterial genomes. Please let me know what else you need to figure this out ASAP. Help!

brianwalenz commented 7 years ago

1) What are the logs in the ovlStore.BUILDING directory showing (logs/2-sort.*.out)?

2) You can switch to the sequential store build method with ovsMethod=sequential. The parallel method, default for large genomes, is designed for a grid, not a single machine. Before restarting, remove the *BUILDING directories.
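
For reference, a minimal sketch of both steps on the command line, assuming the run directory and store prefix shown in the commands and logs above (RA-auto, correction/red_alder.ovlStore.BUILDING); adjust the names and paths to match your actual -p/-d settings:

    # 1) Inspect the logs of the parallel sort jobs in the partially built store.
    tail -n 20 RA-auto/correction/red_alder.ovlStore.BUILDING/logs/2-sort.*.out

    # 2) Remove the partial overlap store, then rerun the same canu command
    #    with ovsMethod=sequential added so the store is built in one process.
    rm -rf RA-auto/correction/*.ovlStore.BUILDING
    canu -p RA -d RA-auto genomeSize=500m useGrid=false ovsMethod=sequential \
      -pacbio-raw All_pacbio_data.fastq

The sequential method replaces the 76 bucketize and 38 sort jobs shown in the logs with a single, longer-running store build.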

raw937 commented 7 years ago
 more 2-sort.000038.out 
Running job 38 based on command line options.
Changed max processes per user from 1024 to 16546621 (max 16546621).

Max open files limited to 4096, no increase possible.

Found 2656787 overlaps from './bucket0004/sliceSizes'.
Found 1894325 overlaps from './bucket0005/sliceSizes'.
Found 4928840 overlaps from './bucket0009/sliceSizes'.
Found 4870495 overlaps from './bucket0013/sliceSizes'.
Found 4411301 overlaps from './bucket0017/sliceSizes'.
Found 5034296 overlaps from './bucket0021/sliceSizes'.
Found 4364715 overlaps from './bucket0025/sliceSizes'.
Found 4028546 overlaps from './bucket0029/sliceSizes'.
Found 2275489 overlaps from './bucket0032/sliceSizes'.
Found 1623086 overlaps from './bucket0033/sliceSizes'.
Found 3402140 overlaps from './bucket0036/sliceSizes'.
Found 3314917 overlaps from './bucket0039/sliceSizes'.
Found 3574978 overlaps from './bucket0042/sliceSizes'.
Found 3779354 overlaps from './bucket0045/sliceSizes'.
Found 3168347 overlaps from './bucket0048/sliceSizes'.
Found 2974655 overlaps from './bucket0051/sliceSizes'.
Found 1847514 overlaps from './bucket0053/sliceSizes'.
Found 1320656 overlaps from './bucket0054/sliceSizes'.
Found 3790374 overlaps from './bucket0056/sliceSizes'.
Found 4158604 overlaps from './bucket0058/sliceSizes'.
Found 4270118 overlaps from './bucket0060/sliceSizes'.
Found 3932840 overlaps from './bucket0062/sliceSizes'.
Found 4323002 overlaps from './bucket0064/sliceSizes'.
Found 4264884 overlaps from './bucket0066/sliceSizes'.
Found 2447468 overlaps from './bucket0067/sliceSizes'.
Found 1757034 overlaps from './bucket0068/sliceSizes'.
Found 4112867 overlaps from './bucket0069/sliceSizes'.
Found 4968781 overlaps from './bucket0070/sliceSizes'.
Found 5241208 overlaps from './bucket0071/sliceSizes'.
Found 5101863 overlaps from './bucket0072/sliceSizes'.
Found 4713074 overlaps from './bucket0073/sliceSizes'.
Found 4740250 overlaps from './bucket0074/sliceSizes'.
Found 5277432 overlaps from './bucket0075/sliceSizes'.
Found 591954 overlaps from './bucket0076/sliceSizes'.
Overlaps need 3.67 GB memory, allowed to use up to (via -M) 4 GB.
Loading 2656787 overlaps from './bucket0004/slice0038'.
Loading 1894325 overlaps from './bucket0005/slice0038'.
Loading 4928840 overlaps from './bucket0009/slice0038'.
Loading 4870495 overlaps from './bucket0013/slice0038'.
Loading 4411301 overlaps from './bucket0017/slice0038'.
Loading 5034296 overlaps from './bucket0021/slice0038'.
Loading 4364715 overlaps from './bucket0025/slice0038'.
Loading 4028546 overlaps from './bucket0029/slice0038'.
Loading 2275489 overlaps from './bucket0032/slice0038'.
Loading 1623086 overlaps from './bucket0033/slice0038'.
Loading 3402140 overlaps from './bucket0036/slice0038'.
Loading 3314917 overlaps from './bucket0039/slice0038'.
Loading 3574978 overlaps from './bucket0042/slice0038'.
Loading 3779354 overlaps from './bucket0045/slice0038'.
Loading 3168347 overlaps from './bucket0048/slice0038'.
Loading 2974655 overlaps from './bucket0051/slice0038'.
Loading 1847514 overlaps from './bucket0053/slice0038'.
Loading 1320656 overlaps from './bucket0054/slice0038'.
Loading 3790374 overlaps from './bucket0056/slice0038'.
Loading 4158604 overlaps from './bucket0058/slice0038'.
Loading 4270118 overlaps from './bucket0060/slice0038'.
Loading 3932840 overlaps from './bucket0062/slice0038'.
Loading 4323002 overlaps from './bucket0064/slice0038'.
Loading 4264884 overlaps from './bucket0066/slice0038'.
Loading 2447468 overlaps from './bucket0067/slice0038'.
Loading 1757034 overlaps from './bucket0068/slice0038'.
Loading 4112867 overlaps from './bucket0069/slice0038'.
Loading 4968781 overlaps from './bucket0070/slice0038'.
Loading 5241208 overlaps from './bucket0071/slice0038'.
Loading 5101863 overlaps from './bucket0072/slice0038'.
Loading 4713074 overlaps from './bucket0073/slice0038'.
Loading 4740250 overlaps from './bucket0074/slice0038'.
Loading 5277432 overlaps from './bucket0075/slice0038'.
Loading 591954 overlaps from './bucket0076/slice0038'.
Sorting.
Writing output.
Writing 123162194 overlaps.
Created ovStore segment './0038' with 0 overlaps for reads from 4294967295 to 0.
Success.

more 3-index.err 
Processing './index'
  Now finished with fragments 1 to 67696 -- 123466811 overlaps.
Processing './0001.index'
  Now finished with fragments 1 to 133414 -- 246935768 overlaps.
Processing './0002.index'
  Now finished with fragments 1 to 196869 -- 370401635 overlaps.
Processing './0003.index'
  Now finished with fragments 1 to 260451 -- 493878946 overlaps.
Processing './0004.index'
  Now finished with fragments 1 to 330782 -- 617344112 overlaps.
Processing './0005.index'
  Now finished with fragments 1 to 396970 -- 740820634 overlaps.
Processing './0006.index'
  Now finished with fragments 1 to 458651 -- 864284310 overlaps.
Processing './0007.index'
  Now finished with fragments 1 to 522939 -- 987758875 overlaps.
Processing './0008.index'
  Now finished with fragments 1 to 598646 -- 1111227410 overlaps.
Processing './0009.index'
  Now finished with fragments 1 to 677591 -- 1234691607 overlaps.
Processing './0010.index'
  Now finished with fragments 1 to 756620 -- 1358165716 overlaps.
Processing './0011.index'
  Now finished with fragments 1 to 847148 -- 1481628385 overlaps.
Processing './0012.index'
  Now finished with fragments 1 to 937771 -- 1605092755 overlaps.
Processing './0013.index'
  Now finished with fragments 1 to 1030865 -- 1728556115 overlaps.
Processing './0014.index'
  Adding empty records for fragments 1030866 to 1030866
  Now finished with fragments 1 to 1114823 -- 1852028753 overlaps.
Processing './0015.index'
  Now finished with fragments 1 to 1200380 -- 1975492527 overlaps.
Processing './0016.index'
  Now finished with fragments 1 to 1305443 -- 2098957620 overlaps.
Processing './0017.index'
  Adding empty records for fragments 1305444 to 1305444
  Now finished with fragments 1 to 1408924 -- 2222423818 overlaps.
Processing './0018.index'
  Now finished with fragments 1 to 1497878 -- 2345900175 overlaps.
Processing './0019.index'
  Now finished with fragments 1 to 1575304 -- 2469370272 overlaps.
Processing './0020.index'
  Now finished with fragments 1 to 1648482 -- 2592832864 overlaps.
Processing './0021.index'
  Now finished with fragments 1 to 1720847 -- 2716314934 overlaps.
Processing './0022.index'
  Now finished with fragments 1 to 1799472 -- 2839782547 overlaps.
Processing './0023.index'
  Now finished with fragments 1 to 1872373 -- 2963264380 overlaps.
Processing './0024.index'
  Now finished with fragments 1 to 1942609 -- 3086747733 overlaps.
Processing './0025.index'
  Now finished with fragments 1 to 2017641 -- 3210221197 overlaps.
Processing './0026.index'
  Now finished with fragments 1 to 2092437 -- 3333685594 overlaps.
Processing './0027.index'
  Now finished with fragments 1 to 2171427 -- 3457155061 overlaps.
Processing './0028.index'
  Now finished with fragments 1 to 2238045 -- 3580655075 overlaps.
Processing './0029.index'
  Now finished with fragments 1 to 2298075 -- 3704126650 overlaps.
Processing './0030.index'
  Now finished with fragments 1 to 2358130 -- 3827598424 overlaps.
Processing './0031.index'
  Now finished with fragments 1 to 2419099 -- 3951073419 overlaps.
Processing './0032.index'
  Now finished with fragments 1 to 2477702 -- 4074537131 overlaps.
Processing './0033.index'
  Now finished with fragments 1 to 2545188 -- 4198019102 overlaps.
Processing './0034.index'
  Now finished with fragments 1 to 2608261 -- 4321490855 overlaps.
Processing './0035.index'
  Now finished with fragments 1 to 2675044 -- 4444953536 overlaps.
Processing './0036.index'
  Now finished with fragments 1 to 2742123 -- 4568416206 overlaps.
Processing './0037.index'
  Now finished with fragments 1 to 2811531 -- 4691578400 overlaps.
Created ovStore '.' with 0 overlaps for reads from 4294967295 to 0.

Removing intermediate files.
Finished.
Success.

 more config.err 
Changed max processes per user from 1024 to 16546621 (max 16546621).

Max open files limited to 4096, no increase possible.

Found 4691578400 (4691.58 million) overlaps.
Configuring for 4.00 GB to 16.00 GB memory and 4080 open files.
Will sort using 38 files; 125829120 (125.83 million) overlaps per bucket; 4.00 GB memory per bucket
  bucket    1 has 123466811 olaps.
  bucket    2 has 123468957 olaps.
  bucket    3 has 123465867 olaps.
  bucket    4 has 123477311 olaps.
  bucket    5 has 123465166 olaps.
  bucket    6 has 123476522 olaps.
  bucket    7 has 123463676 olaps.
  bucket    8 has 123474565 olaps.
  bucket    9 has 123468535 olaps.
  bucket   10 has 123464197 olaps.
  bucket   11 has 123474109 olaps.
  bucket   12 has 123462669 olaps.
  bucket   13 has 123464370 olaps.
  bucket   14 has 123463360 olaps.
  bucket   15 has 123472638 olaps.
  bucket   16 has 123463774 olaps.
  bucket   17 has 123465093 olaps.
  bucket   18 has 123466198 olaps.
  bucket   19 has 123476357 olaps.
  bucket   20 has 123470097 olaps.
  bucket   21 has 123462592 olaps.
  bucket   22 has 123482070 olaps.
  bucket   23 has 123467613 olaps.
  bucket   24 has 123481833 olaps.
  bucket   25 has 123483353 olaps.
  bucket   26 has 123473464 olaps.
  bucket   27 has 123464397 olaps.
  bucket   28 has 123469467 olaps.
  bucket   29 has 123500014 olaps.
  bucket   30 has 123471575 olaps.
  bucket   31 has 123471774 olaps.
  bucket   32 has 123474995 olaps.
  bucket   33 has 123463712 olaps.
  bucket   34 has 123481971 olaps.
  bucket   35 has 123471753 olaps.
  bucket   36 has 123462681 olaps.
  bucket   37 has 123462670 olaps.
  bucket   38 has 123162194 olaps.
Will sort 123.463 million overlaps per bucket, using 38 buckets 3.93 GB per bucket.

-  Saved configuration to './red_alder.ovlStore.BUILDING/config'.

We have 80 threads and 2 TB of memory total.

Are these the right logs?

Would this be the proper command? canu -p RA -d RA-auto genomeSize=500m -pacbio-raw All_pacbio_data.fastq useGrid=false ovsMethod=sequential

Let me know when you can.

Cheers, Rick

brianwalenz commented 7 years ago

Those are the right logs. Everything looks fine.

1) Try just restarting canu (without ovsMethod=sequential). It should notice the work is done and continue.

2) If that fails, you might as well go ahead with the ovsMethod=sequential. The command looks correct.

For both, also give it saveOverlaps=true. This will save intermediate files, which could save you from recomputing overlaps if something goes horribly wrong.
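
As a sketch (same assumptions about the -p/-d values as before), a plain restart with overlaps saved would look like:

    canu -p RA -d RA-auto genomeSize=500m -pacbio-raw All_pacbio_data.fastq \
      useGrid=false saveOverlaps=true

and the fallback is the same command with ovsMethod=sequential added, after removing the *BUILDING directory as described earlier.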

raw937 commented 7 years ago
-- Finished on Mon Jun 12 22:07:54 2017 (176 seconds) with 503829.656 GB free disk space
----------------------------------------
-- Overlap store sorter finished.
-- Finished stage 'cor-overlapStoreSorterCheck', reset canuIteration.
----------------------------------------
-- Starting command on Mon Jun 12 22:07:54 2017 with 503829.656 GB free disk space

    cd correction/red_alder.ovlStore.BUILDING
    ./scripts/3-index.sh \
    > ./logs/3-index.err 2>&1

-- Finished on Mon Jun 12 22:08:01 2017 (7 seconds) with 503833.804 GB free disk space
----------------------------------------
--
-- Overlap store 'correction/red_alder.ovlStore' successfully constructed.
--
-- Purged 117.777 GB in 182 overlap output files and 2 directories.
-- Overlap store 'correction/ra.ovlStore' statistics not available (skipped in correction and trimming stages).
-- Finished stage 'cor-createOverlapStore', reset canuIteration.
-- Set corMinCoverage=4 based on read coverage of 42.
-- Computing global filter scores 'correction/2-correction/ra.globalScores'.

It STOPPED here, then I tried restarting twice.

Errors after restarting:

-- BEGIN CORRECTION
--
-- Set corMinCoverage=4 based on read coverage of 42.
-- Computing global filter scores 'correction/2-correction/ra.globalScores'.
----------------------------------------
-- Starting command on Mon Jun 12 22:16:00 2017 with 503992.637 GB free disk space

    cd correction/2-correction
    /people/canu/Linux-amd64/bin/filterCorrectionOverlaps \
      -G ../ra.gkpStore \
      -O ../ra.ovlStore \
      -S ./ra.globalScores.WORKING \
      -c 40 \
      -l 0 \
    > ./ra.globalScores.err 2>&1
sh: line 6: 27314 Aborted                 /people/canu/Linux-amd64/bin/filterCorrectionOverlaps -G ../red_alder.gkpStore -O ../ra.ovlStore -S ./ra.globalScores.WORKING -c 40 -l 0 > ./ra.globalScores.err 2>&1

-- Finished on Mon Jun 12 22:22:08 2017 (368 seconds) with 503994.806 GB free disk space
----------------------------------------
ERROR:
ERROR:  Failed with exit code 134.  (rc=34304)
ERROR:

ABORT:
ABORT: Canu snapshot v1.5 +54 changes (r8254 f356c2c3f2eb37b53c4e7bf11e927e3fdff4d747)
ABORT: Don't panic, but a mostly harmless error occurred and Canu stopped.
ABORT: Try restarting.  If that doesn't work, ask for help.
ABORT:
ABORT:   failed to globally filter overlaps for correction.
ABORT:
ABORT: Disk space available:  503994.806 GB
ABORT:
ABORT: Last 50 lines of the relevant log file (correction/2-correction/ra.globalScores.err):
ABORT:
ABORT:

-- BEGIN CORRECTION
--
-- Set corMinCoverage=4 based on read coverage of 42.
-- Computing global filter scores 'correction/2-correction/red_alder.globalScores'.
----------------------------------------
-- Starting command on Mon Jun 12 22:23:59 2017 with 503992.989 GB free disk space

    cd correction/2-correction
    /people/canu/Linux-amd64/bin/filterCorrectionOverlaps \
      -G ../ra.gkpStore \
      -O ../ra.ovlStore \
      -S ./ra.globalScores.WORKING \
      -c 40 \
      -l 0 \
    > ./ra.globalScores.err 2>&1
sh: line 6: 28773 Aborted                 /people/canu/Linux-amd64/bin/filterCorrectionOverlaps -G ../red_alder.gkpStore -O ../raovlStore -S ./red_alder.globalScores.WORKING -c 40 -l 0 > ./ra.globalScores.err 2>&1

-- Finished on Mon Jun 12 22:30:10 2017 (371 seconds) with 503986.441 GB free disk space
----------------------------------------
ERROR:
ERROR:  Failed with exit code 134.  (rc=34304)
ERROR:

ABORT:
ABORT: Canu snapshot v1.5 +54 changes (r8254 f356c2c3f2eb37b53c4e7bf11e927e3fdff4d747)
ABORT: Don't panic, but a mostly harmless error occurred and Canu stopped.
ABORT: Try restarting.  If that doesn't work, ask for help.
ABORT:
ABORT:   failed to globally filter overlaps for correction.
ABORT:
ABORT: Disk space available:  503986.441 GB
ABORT:
ABORT: Last 50 lines of the relevant log file (correction/2-correction/ra.globalScores.err):
ABORT:
ABORT:
skoren commented 7 years ago

I'm going to guess the store is corrupt because of the initial stop (which didn't report any error, so it's not clear why it was interrupted). What does the correction/2-correction/ra.globalScores.err log report?

raw937 commented 7 years ago

more ra.globalScores.err

filterCorrectionOverlaps: stores/ovStoreFile.C:342: bool ovFile::readOverlap(ovOverlap*): Assertion `_bufferPos <= _bufferLen' failed.

raw937 commented 7 years ago

The other two times I ran it, that file was empty.

skoren commented 7 years ago

This definitely looks like some kind of corruption in the store. My guess is the initial stop was because the parallel building overloaded your disk and caused an I/O error.

I would remove the asm.ovlStore folder and 2-correction, then run with ovsMethod=sequential saveOverlaps=true. As long as you still have output in correction/1-overlapper/results, it shouldn't have to recompute any overlaps, just rebuild the store using a single process this time.
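
Roughly, as a sketch (the store in your logs is named red_alder.ovlStore / ra.ovlStore, and the directory names below come from your earlier commands, so substitute whatever your run actually used):

    # remove the possibly corrupt store and the filtering directory
    rm -rf RA-auto/correction/*.ovlStore RA-auto/correction/2-correction

    # restart with a sequential store build, keeping overlapper output
    canu -p RA -d RA-auto genomeSize=500m -pacbio-raw All_pacbio_data.fastq \
      useGrid=false ovsMethod=sequential saveOverlaps=true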

raw937 commented 7 years ago

I tested a new command from the beginning. I got pretty far; it just failed at the last step.

module load java/1.8.0_31

canu -p ra_error -d ra_error_dg genomeSize=500m -pacbio-raw All_pacbio_data.fastq useGrid=false corMaxEvidenceErate=0.15 

It failed at the final assembly step. 

-- Finished on Tue Jun 13 20:35:03 2017 (2 seconds) with 503658.8 GB free disk space
----------------------------------------
-- Purging consensus output after loading to ctgStore and/or utgStore.
----------------------------------------
-- Starting command on Tue Jun 13 20:35:03 2017 with 503658.8 GB free disk space

    cd unitigging
    /people/whit040/canu/Linux-amd64/bin/tgStoreDump \
      -G ./ra_error.gkpStore \
      -T ./ra_error.ctgStore 2 \
      -sizes -s 500000000 \
    > ./ra_error.ctgStore/seqDB.v002.sizes.txt

-- Finished on Tue Jun 13 20:35:15 2017 (12 seconds) with 503663.159 GB free disk space
----------------------------------------
-- Found, in version 2, after consensus generation:
--   contigs:      6762 sequences, total length 246709839 bp (including 424 repeats of total length 3892072 bp).
--   bubbles:      0 sequences, total length 0 bp.
--   unassembled:  329283 sequences, total length 1445213000 bp.
--
-- Contig sizes based on genome size --
--            NG (bp)  LG (contigs)    sum (bp)
--         ----------  ------------  ----------
--     10       69433           541    50063460
--     20       49091          1406   100009084
--     30       36036          2600   150016641
--     40       25889          4228   200007928
--
-- Finished stage 'consensusLoad', reset canuIteration.
----------------------------------------
-- Starting command on Tue Jun 13 20:35:15 2017 with 503663.159 GB free disk space

    cd unitigging
    /people/whit040/canu/Linux-amd64/bin/tgStoreCoverageStat \
      -G ./ra_error.gkpStore \
      -T ./ra_error.ctgStore 2 \
      -s 500000000 \
      -o ./ra_error.ctgStore.coverageStat \
    > ./ra_error.ctgStore.coverageStat.err 2>&1

-- Finished on Tue Jun 13 20:35:19 2017 (4 seconds) with 503662.455 GB free disk space
----------------------------------------
-- Finished stage 'consensusAnalyze', reset canuIteration.
--
-- Running jobs.  First attempt out of 2.
----------------------------------------
-- Starting 'gfa' concurrent execution on Tue Jun 13 20:35:19 2017 with 503662.455 GB free disk space (1 processes; 5 concurrently)

    cd unitigging/4-unitigger
    ./alignGFA.sh 1 > ./alignGFA.000001.out 2>&1

-- Finished on Tue Jun 13 20:37:34 2017 (135 seconds) with 503665.004 GB free disk space
----------------------------------------
--
-- Running jobs.  Second attempt out of 2.
----------------------------------------
-- Starting 'gfa' concurrent execution on Tue Jun 13 20:37:34 2017 with 503665.004 GB free disk space (1 processes; 5 concurrently)

    cd unitigging/4-unitigger
    ./alignGFA.sh 1 > ./alignGFA.000001.out 2>&1

-- Finished on Tue Jun 13 20:38:29 2017 (55 seconds) with 503664.327 GB free disk space
----------------------------------------
--
-- GFA alignment failed.
--

ABORT:
ABORT: Canu snapshot v1.5 +54 changes (r8254 f356c2c3f2eb37b53c4e7bf11e927e3fdff4d747)
ABORT: Don't panic, but a mostly harmless error occurred and Canu stopped.
ABORT: Try restarting.  If that doesn't work, ask for help.
ABORT:
ABORT:   canu iteration count too high, stopping pipeline (most likely a problem in the grid-based computes).
ABORT:

Any thoughts? I have it ready for re-start.

skoren commented 7 years ago

You probably hit the same bug as #527; update to the latest code and restart, and Canu should run to completion. I will note that you don't necessarily need the alignGFA step unless you're planning to work with the Canu graph outputs. You can get the assembled contigs directly using the tgStoreDump command: http://canu.readthedocs.io/en/latest/commands/tgStoreDump.html

raw937 commented 7 years ago

Tried to restart; same thing:

--  genomeSize        500000000
--
--  Overlap Generation Limits:
--    corOvlErrorRate 0.2400 ( 24.00%)
--    obtOvlErrorRate 0.0450 (  4.50%)
--    utgOvlErrorRate 0.0450 (  4.50%)
--
--  Overlap Processing Limits:
--    corErrorRate    0.3000 ( 30.00%)
--    obtErrorRate    0.0450 (  4.50%)
--    utgErrorRate    0.0450 (  4.50%)
--    cnsErrorRate    0.0750 (  7.50%)
--
--
-- BEGIN ASSEMBLY
--
--
-- Running jobs.  First attempt out of 2.
----------------------------------------
-- Starting 'gfa' concurrent execution on Wed Jun 14 10:20:24 2017 with 504166.109 GB free disk space (1 processes; 5 concurrently)

    cd unitigging/4-unitigger
    ./alignGFA.sh 1 > ./alignGFA.000001.out 2>&1

-- Finished on Wed Jun 14 10:21:24 2017 (60 seconds) with 504172.133 GB free disk space
----------------------------------------
--
-- Running jobs.  Second attempt out of 2.
----------------------------------------
-- Starting 'gfa' concurrent execution on Wed Jun 14 10:21:24 2017 with 504172.133 GB free disk space (1 processes; 5 concurrently)

    cd unitigging/4-unitigger
    ./alignGFA.sh 1 > ./alignGFA.000001.out 2>&1

-- Finished on Wed Jun 14 10:22:19 2017 (55 seconds) with 504173.107 GB free disk space
----------------------------------------
--
-- GFA alignment failed.
--

ABORT:
ABORT: Canu snapshot v1.5 +54 changes (r8254 f356c2c3f2eb37b53c4e7bf11e927e3fdff4d747)
ABORT: Don't panic, but a mostly harmless error occurred and Canu stopped.
ABORT: Try restarting.  If that doesn't work, ask for help.
ABORT:
ABORT:   canu iteration count too high, stopping pipeline (most likely a problem in the grid-based computes).
ABORT:

This is the version I am using:

contigFilter 2 1000 0.75 0.75 2 num 5
-- Canu snapshot v1.5 +54 changes (r8254 f356c2c3f2eb37b53c4e7bf11e927e3fdff4d747)

Is this the correct version?

raw937 commented 7 years ago

I am using canu-1.5.Linux-amd64.tar.xz

?

raw937 commented 7 years ago

I would like to use this command (usage: tgStoreDump -G <gkpStore> -T <tigStore> [opts]). Which folder are these in? There is a gkpStore in unitigging, trimming, and correction. Also, I can't find the tigStore.

Could you provide an example command? That would be great.

THANK YOU!

brianwalenz commented 7 years ago

Nope, that's not the latest. See 'Installing' at http://canu.readthedocs.io/en/latest/, which uses 'git clone'.
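
Roughly, the install looks like this (a sketch based on those instructions; check the docs for the exact steps):

    git clone https://github.com/marbl/canu.git
    cd canu/src
    make -j 8
    # binaries end up in ../Linux-amd64/bin (e.g. ../Linux-amd64/bin/canu)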

If you don't need the gfa results, create four empty files for the gfa outputs.

touch unitigging/4-unitigger/ra_error.unitigs.aligned.gfa
touch unitigging/4-unitigger/ra_error.contigs.aligned.gfa
touch unitigging/4-unitigger/ra_error.unitigs.aligned.bed
touch unitigging/4-unitigger/ra_error.unitigs.aligned.bed.gfa

then restart. The file names are listed at the end of unitigging/4-unitigger/alignGFA.sh. If those four files exist, regardless of size, the alignGFA step will be skipped and the outputs for the assembly will be generated.

'tigStore' is the generic name for the ctgStore (contigs) and utgStore (unitigs). That command does a lot, so I can't really give a complete example, but

tgStoreDump -G *gkpStore -T *ctgStore 2 -consensus

will dump consensus sequences for contigs.

raw937 commented 7 years ago

What are the gfa results?

I ran the command in the unitigging folder: tgStoreDump -G gkpStore -T ctgStore 2 -consensus >contigs_out.fasta

?

These aren't contigs? What are they? Sorry.

skoren commented 7 years ago

GFA is the graphical output from the assembler (https://github.com/GFA-spec/GFA-spec).

The dump command will output contigs, unitigs, and unassembled reads. So yes, those are contigs, along with other assembly outputs. You can see the type of each sequence from its header:

tig00000001 len=1358507 reads=4870 covStat=5495.26 gappedBases=no class=contig suggestRepeat=no suggestCircular=no
tig00094448 len=3278 reads=1 covStat=0.00 gappedBases=no class=unassm suggestRepeat=no suggestCircular=no

To get just contigs, add the -contigs option to your command.
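
For example, extending the command you ran (the ra_error prefix is from your latest run, and the output file name is just an example):

    cd unitigging
    tgStoreDump -G ra_error.gkpStore -T ra_error.ctgStore 2 \
      -consensus -contigs > contigs_only.fasta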

raw937 commented 7 years ago

Thank you both so much! And thanks for being patient with me, much appreciated.

I did get some contig outputs using tgStoreDump. I have recently re-installed and am running again to see if it will finish to completion.

I will let you know.

Is it possible to add in Illumina contigs in a hybrid fashion? Or reads?

raw937 commented 7 years ago

It completed successfully! Thank you.

The .bed file was empty.

Any thoughts on a hybrid with Illumina?

skoren commented 7 years ago

There's no support for Illumina data in Canu, so you could try to merge an Illumina assembly and a PacBio assembly using a third-party tool, but that isn't likely to improve your result. You could use the Illumina data to polish the final consensus, which is what we normally do.

raw937 commented 7 years ago

What's your favorite polishing program?

Cheers and many thanks, Rick


skoren commented 7 years ago

Since you have PacBio data, Quiver/Arrow should be first. For Illumina data, we typically use Pilon.
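
A minimal sketch of an Illumina polish with Pilon (bwa, samtools, and Pilon are separate tools, not part of Canu; file names here are placeholders and the options may differ by version):

    # map the Illumina reads to the Canu contigs
    bwa index canu_contigs.fasta
    bwa mem -t 16 canu_contigs.fasta illumina_R1.fastq illumina_R2.fastq | \
      samtools sort -@ 8 -o illumina.sorted.bam -
    samtools index illumina.sorted.bam

    # one round of Pilon; repeat with the polished output if needed
    java -Xmx128g -jar pilon.jar --genome canu_contigs.fasta \
      --frags illumina.sorted.bam --output canu_contigs.pilon1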

skoren commented 7 years ago

Since the assembly finished, I'm closing this original issue. If you have another issue or an error with another run, please open a new one.