raw937 closed this issue 7 years ago.
1) What are the logs in the ovlStore.BUILDING directory showing (logs/2-sort.*.out)?
2) You can switch to the sequential store build method with ovsMethod=sequential. The parallel method, default for large genomes, is designed for a grid, not a single machine. Before restarting, remove the *BUILDING directories.
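If it helps, the cleanup before switching methods can look like this. A minimal sketch, assuming the directory layout from the logs below (the store name red_alder comes from this run; adjust for yours):

```shell
# Remove any partially built overlap-store directories before restarting.
# The path below is an assumption based on this thread's logs.
rm -rf correction/red_alder.ovlStore.BUILDING

# Then restart with the single-process store builder (shown, not executed here):
# canu -p RA -d RA-auto genomeSize=500m -pacbio-raw All_pacbio_data.fastq \
#      useGrid=false ovsMethod=sequential
```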
more 2-sort.000038.out
Running job 38 based on command line options.
Changed max processes per user from 1024 to 16546621 (max 16546621).
Max open files limited to 4096, no increase possible.
Found 2656787 overlaps from './bucket0004/sliceSizes'.
Found 1894325 overlaps from './bucket0005/sliceSizes'.
Found 4928840 overlaps from './bucket0009/sliceSizes'.
Found 4870495 overlaps from './bucket0013/sliceSizes'.
Found 4411301 overlaps from './bucket0017/sliceSizes'.
Found 5034296 overlaps from './bucket0021/sliceSizes'.
Found 4364715 overlaps from './bucket0025/sliceSizes'.
Found 4028546 overlaps from './bucket0029/sliceSizes'.
Found 2275489 overlaps from './bucket0032/sliceSizes'.
Found 1623086 overlaps from './bucket0033/sliceSizes'.
Found 3402140 overlaps from './bucket0036/sliceSizes'.
Found 3314917 overlaps from './bucket0039/sliceSizes'.
Found 3574978 overlaps from './bucket0042/sliceSizes'.
Found 3779354 overlaps from './bucket0045/sliceSizes'.
Found 3168347 overlaps from './bucket0048/sliceSizes'.
Found 2974655 overlaps from './bucket0051/sliceSizes'.
Found 1847514 overlaps from './bucket0053/sliceSizes'.
Found 1320656 overlaps from './bucket0054/sliceSizes'.
Found 3790374 overlaps from './bucket0056/sliceSizes'.
Found 4158604 overlaps from './bucket0058/sliceSizes'.
Found 4270118 overlaps from './bucket0060/sliceSizes'.
Found 3932840 overlaps from './bucket0062/sliceSizes'.
Found 4323002 overlaps from './bucket0064/sliceSizes'.
Found 4264884 overlaps from './bucket0066/sliceSizes'.
Found 2447468 overlaps from './bucket0067/sliceSizes'.
Found 1757034 overlaps from './bucket0068/sliceSizes'.
Found 4112867 overlaps from './bucket0069/sliceSizes'.
Found 4968781 overlaps from './bucket0070/sliceSizes'.
Found 5241208 overlaps from './bucket0071/sliceSizes'.
Found 5101863 overlaps from './bucket0072/sliceSizes'.
Found 4713074 overlaps from './bucket0073/sliceSizes'.
Found 4740250 overlaps from './bucket0074/sliceSizes'.
Found 5277432 overlaps from './bucket0075/sliceSizes'.
Found 591954 overlaps from './bucket0076/sliceSizes'.
Overlaps need 3.67 GB memory, allowed to use up to (via -M) 4 GB.
Loading 2656787 overlaps from './bucket0004/slice0038'.
Loading 1894325 overlaps from './bucket0005/slice0038'.
Loading 4928840 overlaps from './bucket0009/slice0038'.
Loading 4870495 overlaps from './bucket0013/slice0038'.
Loading 4411301 overlaps from './bucket0017/slice0038'.
Loading 5034296 overlaps from './bucket0021/slice0038'.
Loading 4364715 overlaps from './bucket0025/slice0038'.
Loading 4028546 overlaps from './bucket0029/slice0038'.
Loading 2275489 overlaps from './bucket0032/slice0038'.
Loading 1623086 overlaps from './bucket0033/slice0038'.
Loading 3402140 overlaps from './bucket0036/slice0038'.
Loading 3314917 overlaps from './bucket0039/slice0038'.
Loading 3574978 overlaps from './bucket0042/slice0038'.
Loading 3779354 overlaps from './bucket0045/slice0038'.
Loading 3168347 overlaps from './bucket0048/slice0038'.
Loading 2974655 overlaps from './bucket0051/slice0038'.
Loading 1847514 overlaps from './bucket0053/slice0038'.
Loading 1320656 overlaps from './bucket0054/slice0038'.
Loading 3790374 overlaps from './bucket0056/slice0038'.
Loading 4158604 overlaps from './bucket0058/slice0038'.
Loading 4270118 overlaps from './bucket0060/slice0038'.
Loading 3932840 overlaps from './bucket0062/slice0038'.
Loading 4323002 overlaps from './bucket0064/slice0038'.
Loading 4264884 overlaps from './bucket0066/slice0038'.
Loading 2447468 overlaps from './bucket0067/slice0038'.
Loading 1757034 overlaps from './bucket0068/slice0038'.
Loading 4112867 overlaps from './bucket0069/slice0038'.
Loading 4968781 overlaps from './bucket0070/slice0038'.
Loading 5241208 overlaps from './bucket0071/slice0038'.
Loading 5101863 overlaps from './bucket0072/slice0038'.
Loading 4713074 overlaps from './bucket0073/slice0038'.
Loading 4740250 overlaps from './bucket0074/slice0038'.
Loading 5277432 overlaps from './bucket0075/slice0038'.
Loading 591954 overlaps from './bucket0076/slice0038'.
Sorting.
Writing output.
Writing 123162194 overlaps.
Created ovStore segment './0038' with 0 overlaps for reads from 4294967295 to 0.
Success.
more 3-index.err
Processing './index'
Now finished with fragments 1 to 67696 -- 123466811 overlaps.
Processing './0001.index'
Now finished with fragments 1 to 133414 -- 246935768 overlaps.
Processing './0002.index'
Now finished with fragments 1 to 196869 -- 370401635 overlaps.
Processing './0003.index'
Now finished with fragments 1 to 260451 -- 493878946 overlaps.
Processing './0004.index'
Now finished with fragments 1 to 330782 -- 617344112 overlaps.
Processing './0005.index'
Now finished with fragments 1 to 396970 -- 740820634 overlaps.
Processing './0006.index'
Now finished with fragments 1 to 458651 -- 864284310 overlaps.
Processing './0007.index'
Now finished with fragments 1 to 522939 -- 987758875 overlaps.
Processing './0008.index'
Now finished with fragments 1 to 598646 -- 1111227410 overlaps.
Processing './0009.index'
Now finished with fragments 1 to 677591 -- 1234691607 overlaps.
Processing './0010.index'
Now finished with fragments 1 to 756620 -- 1358165716 overlaps.
Processing './0011.index'
Now finished with fragments 1 to 847148 -- 1481628385 overlaps.
Processing './0012.index'
Now finished with fragments 1 to 937771 -- 1605092755 overlaps.
Processing './0013.index'
Now finished with fragments 1 to 1030865 -- 1728556115 overlaps.
Processing './0014.index'
Adding empty records for fragments 1030866 to 1030866
Now finished with fragments 1 to 1114823 -- 1852028753 overlaps.
Processing './0015.index'
Now finished with fragments 1 to 1200380 -- 1975492527 overlaps.
Processing './0016.index'
Now finished with fragments 1 to 1305443 -- 2098957620 overlaps.
Processing './0017.index'
Adding empty records for fragments 1305444 to 1305444
Now finished with fragments 1 to 1408924 -- 2222423818 overlaps.
Processing './0018.index'
Now finished with fragments 1 to 1497878 -- 2345900175 overlaps.
Processing './0019.index'
Now finished with fragments 1 to 1575304 -- 2469370272 overlaps.
Processing './0020.index'
Now finished with fragments 1 to 1648482 -- 2592832864 overlaps.
Processing './0021.index'
Now finished with fragments 1 to 1720847 -- 2716314934 overlaps.
Processing './0022.index'
Now finished with fragments 1 to 1799472 -- 2839782547 overlaps.
Processing './0023.index'
Now finished with fragments 1 to 1872373 -- 2963264380 overlaps.
Processing './0024.index'
Now finished with fragments 1 to 1942609 -- 3086747733 overlaps.
Processing './0025.index'
Now finished with fragments 1 to 2017641 -- 3210221197 overlaps.
Processing './0026.index'
Now finished with fragments 1 to 2092437 -- 3333685594 overlaps.
Processing './0027.index'
Now finished with fragments 1 to 2171427 -- 3457155061 overlaps.
Processing './0028.index'
Now finished with fragments 1 to 2238045 -- 3580655075 overlaps.
Processing './0029.index'
Now finished with fragments 1 to 2298075 -- 3704126650 overlaps.
Processing './0030.index'
Now finished with fragments 1 to 2358130 -- 3827598424 overlaps.
Processing './0031.index'
Now finished with fragments 1 to 2419099 -- 3951073419 overlaps.
Processing './0032.index'
Now finished with fragments 1 to 2477702 -- 4074537131 overlaps.
Processing './0033.index'
Now finished with fragments 1 to 2545188 -- 4198019102 overlaps.
Processing './0034.index'
Now finished with fragments 1 to 2608261 -- 4321490855 overlaps.
Processing './0035.index'
Now finished with fragments 1 to 2675044 -- 4444953536 overlaps.
Processing './0036.index'
Now finished with fragments 1 to 2742123 -- 4568416206 overlaps.
Processing './0037.index'
Now finished with fragments 1 to 2811531 -- 4691578400 overlaps.
Created ovStore '.' with 0 overlaps for reads from 4294967295 to 0.
Removing intermediate files.
Finished.
Success.
more config.err
Changed max processes per user from 1024 to 16546621 (max 16546621).
Max open files limited to 4096, no increase possible.
Found 4691578400 (4691.58 million) overlaps.
Configuring for 4.00 GB to 16.00 GB memory and 4080 open files.
Will sort using 38 files; 125829120 (125.83 million) overlaps per bucket; 4.00 GB memory per bucket
bucket 1 has 123466811 olaps.
bucket 2 has 123468957 olaps.
bucket 3 has 123465867 olaps.
bucket 4 has 123477311 olaps.
bucket 5 has 123465166 olaps.
bucket 6 has 123476522 olaps.
bucket 7 has 123463676 olaps.
bucket 8 has 123474565 olaps.
bucket 9 has 123468535 olaps.
bucket 10 has 123464197 olaps.
bucket 11 has 123474109 olaps.
bucket 12 has 123462669 olaps.
bucket 13 has 123464370 olaps.
bucket 14 has 123463360 olaps.
bucket 15 has 123472638 olaps.
bucket 16 has 123463774 olaps.
bucket 17 has 123465093 olaps.
bucket 18 has 123466198 olaps.
bucket 19 has 123476357 olaps.
bucket 20 has 123470097 olaps.
bucket 21 has 123462592 olaps.
bucket 22 has 123482070 olaps.
bucket 23 has 123467613 olaps.
bucket 24 has 123481833 olaps.
bucket 25 has 123483353 olaps.
bucket 26 has 123473464 olaps.
bucket 27 has 123464397 olaps.
bucket 28 has 123469467 olaps.
bucket 29 has 123500014 olaps.
bucket 30 has 123471575 olaps.
bucket 31 has 123471774 olaps.
bucket 32 has 123474995 olaps.
bucket 33 has 123463712 olaps.
bucket 34 has 123481971 olaps.
bucket 35 has 123471753 olaps.
bucket 36 has 123462681 olaps.
bucket 37 has 123462670 olaps.
bucket 38 has 123162194 olaps.
Will sort 123.463 million overlaps per bucket, using 38 buckets 3.93 GB per bucket.
- Saved configuration to './red_alder.ovlStore.BUILDING/config'.
We have 80 threads and 2 TB of memory total.
Are these the right logs?
Would this be the proper command? canu -p RA -d RA-auto genomeSize=500m -pacbio-raw All_pacbio_data.fastq useGrid=false ovsMethod=sequential
Let me know when you can.
Cheers Rick
Those are the right logs. Everything looks fine.
1) Try just restarting canu (without ovsMethod=sequential). It should notice the work is done and continue.
2) If that fails, you might as well go ahead with the ovsMethod=sequential. The command looks correct.
For both, also give it saveOverlaps=true. This will save intermediate files, which could save you from recomputing overlaps if something goes horribly wrong.
-- Finished on Mon Jun 12 22:07:54 2017 (176 seconds) with 503829.656 GB free disk space
----------------------------------------
-- Overlap store sorter finished.
-- Finished stage 'cor-overlapStoreSorterCheck', reset canuIteration.
----------------------------------------
-- Starting command on Mon Jun 12 22:07:54 2017 with 503829.656 GB free disk space
cd correction/red_alder.ovlStore.BUILDING
./scripts/3-index.sh \
> ./logs/3-index.err 2>&1
-- Finished on Mon Jun 12 22:08:01 2017 (7 seconds) with 503833.804 GB free disk space
----------------------------------------
--
-- Overlap store 'correction/red_alder.ovlStore' successfully constructed.
--
-- Purged 117.777 GB in 182 overlap output files and 2 directories.
-- Overlap store 'correction/ra.ovlStore' statistics not available (skipped in correction and trimming stages).
-- Finished stage 'cor-createOverlapStore', reset canuIteration.
-- Set corMinCoverage=4 based on read coverage of 42.
-- Computing global filter scores 'correction/2-correction/ra.globalScores'.
STOPPED. Then I tried re-starting twice; errors after re-starting:
-- BEGIN CORRECTION
--
-- Set corMinCoverage=4 based on read coverage of 42.
-- Computing global filter scores 'correction/2-correction/ra.globalScores'.
----------------------------------------
-- Starting command on Mon Jun 12 22:16:00 2017 with 503992.637 GB free disk space
cd correction/2-correction
/people/canu/Linux-amd64/bin/filterCorrectionOverlaps \
-G ../ra.gkpStore \
-O ../raovlStore \
-S ./ra.globalScores.WORKING \
-c 40 \
-l 0 \
> ./ra.globalScores.err 2>&1
sh: line 6: 27314 Aborted /people/canu/Linux-amd64/bin/filterCorrectionOverlaps -G ../red_alder.gkpStore -O ../ra.ovlStore -S ./ra.globalScores.WORKING -c 40 -l 0 > ./ra.globalScores.err 2>&1
-- Finished on Mon Jun 12 22:22:08 2017 (368 seconds) with 503994.806 GB free disk space
----------------------------------------
ERROR:
ERROR: Failed with exit code 134. (rc=34304)
ERROR:
ABORT:
ABORT: Canu snapshot v1.5 +54 changes (r8254 f356c2c3f2eb37b53c4e7bf11e927e3fdff4d747)
ABORT: Don't panic, but a mostly harmless error occurred and Canu stopped.
ABORT: Try restarting. If that doesn't work, ask for help.
ABORT:
ABORT: failed to globally filter overlaps for correction.
ABORT:
ABORT: Disk space available: 503994.806 GB
ABORT:
ABORT: Last 50 lines of the relevant log file (correction/2-correction/ra.globalScores.err):
ABORT:
ABORT:
-- BEGIN CORRECTION
--
-- Set corMinCoverage=4 based on read coverage of 42.
-- Computing global filter scores 'correction/2-correction/red_alder.globalScores'.
----------------------------------------
-- Starting command on Mon Jun 12 22:23:59 2017 with 503992.989 GB free disk space
cd correction/2-correction
/people/canu/Linux-amd64/bin/filterCorrectionOverlaps \
-G ../ra.gkpStore \
-O ../rar.ovlStore \
-S ./ra.globalScores.WORKING \
-c 40 \
-l 0 \
> ./ra_globalScores.err 2>&1
sh: line 6: 28773 Aborted /people/canu/Linux-amd64/bin/filterCorrectionOverlaps -G ../red_alder.gkpStore -O ../raovlStore -S ./red_alder.globalScores.WORKING -c 40 -l 0 > ./ra.globalScores.err 2>&1
-- Finished on Mon Jun 12 22:30:10 2017 (371 seconds) with 503986.441 GB free disk space
----------------------------------------
ERROR:
ERROR: Failed with exit code 134. (rc=34304)
ERROR:
ABORT:
ABORT: Canu snapshot v1.5 +54 changes (r8254 f356c2c3f2eb37b53c4e7bf11e927e3fdff4d747)
ABORT: Don't panic, but a mostly harmless error occurred and Canu stopped.
ABORT: Try restarting. If that doesn't work, ask for help.
ABORT:
ABORT: failed to globally filter overlaps for correction.
ABORT:
ABORT: Disk space available: 503986.441 GB
ABORT:
ABORT: Last 50 lines of the relevant log file (correction/2-correction/ra.globalScores.err):
ABORT:
ABORT:
I'm going to guess the store is corrupt because of the initial stop (which didn't report any error, so it's not clear why it was interrupted). What does the correction/2-correction/ra.globalScores.err log report?
more ra.globalScores.err
filterCorrectionOverlaps: stores/ovStoreFile.C:342: bool ovFile::readOverlap(ovOverlap*): Assertion `_bufferPos <= _bufferLen' failed.
The other two times I ran it, that file was empty.
This definitely looks like some kind of corruption in the store. My guess is the initial stop was because the parallel building overloaded your disk and caused an I/O error.
I would remove the asm.ovlStore folder and 2-correction and run with ovsMethod=sequential saveOverlaps=true. As long as you still have output in correction/1-overlapper/results, it shouldn't have to recompute any overlaps, just re-build the store using a single process this time.
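A sketch of that cleanup, with store names following this run's logs (treat the exact paths as assumptions to be checked against your directory):

```shell
# Remove the (likely corrupt) store and the downstream correction work dir.
rm -rf correction/red_alder.ovlStore
rm -rf correction/2-correction

# The overlapper outputs in correction/1-overlapper/results are left in
# place, so restarting with ovsMethod=sequential saveOverlaps=true only
# rebuilds the store rather than recomputing overlaps.
```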
I tested a new command from the beginning. I got pretty far; it just failed at the last step.
module load java/1.8.0_31
canu -p ra_error -d ra_error_dg genomeSize=500m -pacbio-raw All_pacbio_data.fastq useGrid=false corMaxEvidenceErate=0.15
It failed at the final assembly step.
-- Finished on Tue Jun 13 20:35:03 2017 (2 seconds) with 503658.8 GB free disk space
----------------------------------------
-- Purging consensus output after loading to ctgStore and/or utgStore.
----------------------------------------
-- Starting command on Tue Jun 13 20:35:03 2017 with 503658.8 GB free disk space
cd unitigging
/people/whit040/canu/Linux-amd64/bin/tgStoreDump \
-G ./ra_error.gkpStore \
-T ./ra_error.ctgStore 2 \
-sizes -s 500000000 \
> ./ra_error.ctgStore/seqDB.v002.sizes.txt
-- Finished on Tue Jun 13 20:35:15 2017 (12 seconds) with 503663.159 GB free disk space
----------------------------------------
-- Found, in version 2, after consensus generation:
-- contigs: 6762 sequences, total length 246709839 bp (including 424 repeats of total length 3892072 bp).
-- bubbles: 0 sequences, total length 0 bp.
-- unassembled: 329283 sequences, total length 1445213000 bp.
--
-- Contig sizes based on genome size --
-- NG (bp) LG (contigs) sum (bp)
-- ---------- ------------ ----------
-- 10 69433 541 50063460
-- 20 49091 1406 100009084
-- 30 36036 2600 150016641
-- 40 25889 4228 200007928
--
-- Finished stage 'consensusLoad', reset canuIteration.
----------------------------------------
-- Starting command on Tue Jun 13 20:35:15 2017 with 503663.159 GB free disk space
cd unitigging
/people/whit040/canu/Linux-amd64/bin/tgStoreCoverageStat \
-G ./ra_error.gkpStore \
-T ./ra_error.ctgStore 2 \
-s 500000000 \
-o ./ra_error.ctgStore.coverageStat \
> ./ra_error.ctgStore.coverageStat.err 2>&1
-- Finished on Tue Jun 13 20:35:19 2017 (4 seconds) with 503662.455 GB free disk space
----------------------------------------
-- Finished stage 'consensusAnalyze', reset canuIteration.
--
-- Running jobs. First attempt out of 2.
----------------------------------------
-- Starting 'gfa' concurrent execution on Tue Jun 13 20:35:19 2017 with 503662.455 GB free disk space (1 processes; 5 concurrently)
cd unitigging/4-unitigger
./alignGFA.sh 1 > ./alignGFA.000001.out 2>&1
-- Finished on Tue Jun 13 20:37:34 2017 (135 seconds) with 503665.004 GB free disk space
----------------------------------------
--
-- Running jobs. Second attempt out of 2.
----------------------------------------
-- Starting 'gfa' concurrent execution on Tue Jun 13 20:37:34 2017 with 503665.004 GB free disk space (1 processes; 5 concurrently)
cd unitigging/4-unitigger
./alignGFA.sh 1 > ./alignGFA.000001.out 2>&1
-- Finished on Tue Jun 13 20:38:29 2017 (55 seconds) with 503664.327 GB free disk space
----------------------------------------
--
-- GFA alignment failed.
--
ABORT:
ABORT: Canu snapshot v1.5 +54 changes (r8254 f356c2c3f2eb37b53c4e7bf11e927e3fdff4d747)
ABORT: Don't panic, but a mostly harmless error occurred and Canu stopped.
ABORT: Try restarting. If that doesn't work, ask for help.
ABORT:
ABORT: canu iteration count too high, stopping pipeline (most likely a problem in the grid-based computes).
ABORT:
Any thoughts? I have it ready for re-start.
You probably hit the same bug as #527; update to the latest code and re-start, and Canu should run to completion. I will note you don't necessarily need the alignGFA step unless you're planning to work with the Canu graph outputs. You can get the assembled contigs directly using the tgStoreDump command: http://canu.readthedocs.io/en/latest/commands/tgStoreDump.html
Tried to restart; same thing:
-- genomeSize 500000000
--
-- Overlap Generation Limits:
-- corOvlErrorRate 0.2400 ( 24.00%)
-- obtOvlErrorRate 0.0450 ( 4.50%)
-- utgOvlErrorRate 0.0450 ( 4.50%)
--
-- Overlap Processing Limits:
-- corErrorRate 0.3000 ( 30.00%)
-- obtErrorRate 0.0450 ( 4.50%)
-- utgErrorRate 0.0450 ( 4.50%)
-- cnsErrorRate 0.0750 ( 7.50%)
--
--
-- BEGIN ASSEMBLY
--
--
-- Running jobs. First attempt out of 2.
----------------------------------------
-- Starting 'gfa' concurrent execution on Wed Jun 14 10:20:24 2017 with 504166.109 GB free disk space (1 processes; 5 concurrently)
cd unitigging/4-unitigger
./alignGFA.sh 1 > ./alignGFA.000001.out 2>&1
-- Finished on Wed Jun 14 10:21:24 2017 (60 seconds) with 504172.133 GB free disk space
----------------------------------------
--
-- Running jobs. Second attempt out of 2.
----------------------------------------
-- Starting 'gfa' concurrent execution on Wed Jun 14 10:21:24 2017 with 504172.133 GB free disk space (1 processes; 5 concurrently)
cd unitigging/4-unitigger
./alignGFA.sh 1 > ./alignGFA.000001.out 2>&1
-- Finished on Wed Jun 14 10:22:19 2017 (55 seconds) with 504173.107 GB free disk space
----------------------------------------
--
-- GFA alignment failed.
--
ABORT:
ABORT: Canu snapshot v1.5 +54 changes (r8254 f356c2c3f2eb37b53c4e7bf11e927e3fdff4d747)
ABORT: Don't panic, but a mostly harmless error occurred and Canu stopped.
ABORT: Try restarting. If that doesn't work, ask for help.
ABORT:
ABORT: canu iteration count too high, stopping pipeline (most likely a problem in the grid-based computes).
ABORT:
This is the version I am using:
contigFilter 2 1000 0.75 0.75 2 num 5
-- Canu snapshot v1.5 +54 changes (r8254 f356c2c3f2eb37b53c4e7bf11e927e3fdff4d747)
Is this the correct version? I am using canu-1.5.Linux-amd64.tar.xz.
I would like to use this command:
usage: tgStoreDump -G
Could you provide an example command? That would be great.
THANK YOU!
Nope, that's not the latest. See 'Installing' at http://canu.readthedocs.io/en/latest/, which uses 'git clone'.
If you don't need the gfa results, create four empty files for the gfa outputs.
touch unitigging/4-unitigger/ra_error.unitigs.aligned.gfa
touch unitigging/4-unitigger/ra_error.contigs.aligned.gfa
touch unitigging/4-unitigger/ra_error.unitigs.aligned.bed
touch unitigging/4-unitigger/ra_error.unitigs.aligned.bed.gfa
then restart. The file names are listed at the end of unitigging/4-unitigger/alignGFA.sh. If those four files exist, regardless of size, the alignGFA step will be skipped and the outputs for the assembly generated.
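The same four placeholder files can be created in one shot. The run name ra_error and the directory come from this thread; the mkdir -p only makes the sketch safe to try outside a real assembly directory:

```shell
# Create empty GFA/BED outputs so the alignGFA step is treated as done.
mkdir -p unitigging/4-unitigger
touch unitigging/4-unitigger/ra_error.unitigs.aligned.gfa \
      unitigging/4-unitigger/ra_error.contigs.aligned.gfa \
      unitigging/4-unitigger/ra_error.unitigs.aligned.bed \
      unitigging/4-unitigger/ra_error.unitigs.aligned.bed.gfa
```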
'tigStore' is the generic name for the ctgStore (contigs) and utgStore (unitigs). That command does a lot, so I can't really give a complete example, but
tgStoreDump -G *gkpStore -T *ctgStore 2 -consensus
will dump consensus sequences for contigs.
What are the GFA results?
I ran the command in the unitigging folder: tgStoreDump -G gkpStore -T ctgStore 2 -consensus >contigs_out.fasta
These aren't contigs? What are they? Sorry.
GFA is the graphical output from the assembler (https://github.com/GFA-spec/GFA-spec).
The dump command will output contigs, unitigs, and unassembled reads. So yes, those are contigs along with other assembly outputs. You can see the type of each output from its header:
tig00000001 len=1358507 reads=4870 covStat=5495.26 gappedBases=no class=contig suggestRepeat=no suggestCircular=no
tig00094448 len=3278 reads=1 covStat=0.00 gappedBases=no class=unassm suggestRepeat=no suggestCircular=no
To get just contigs, add the -contigs option to your command.
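Putting the pieces together, a hypothetical invocation might look like the following (run from the unitigging directory; the ra_error prefix matches this run, and the guard just makes the sketch a no-op where tgStoreDump is not on your PATH):

```shell
# Dump consensus sequences for contigs only, redirected to a FASTA file.
if command -v tgStoreDump >/dev/null 2>&1; then
  tgStoreDump -G ./ra_error.gkpStore -T ./ra_error.ctgStore 2 \
    -consensus -contigs > ra_error.contigs.fasta
fi
```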
Thank you both so much! And, being patient with me - much appreciated.
I did get some contig outputs using tgStoreDump. I have recently re-installed and am running again to see if it will finish to completion.
I will let you know.
Is it possible to add in Illumina contigs in a hybrid fashion? Or reads?
It completed successfully! Thank you.
The .bed file was empty. Any thoughts on the hybrid with Illumina?
There's no support for Illumina data in Canu so you could try to merge an Illumina assembly and a PacBio assembly using a third-party tool but that isn't likely to improve your result. You could use the Illumina data to polish the final consensus which is what we normally do.
What's your favorite polishing program?
Cheers and many thanks Rick
On Wed, Jun 14, 2017 at 1:15 PM, Sergey Koren notifications@github.com wrote:
Since you have PacBio data, Quiver/Arrow should be first. For Illumina data, we typically use Pilon.
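For reference, Pilon is typically run on Illumina reads aligned back to the assembly. A minimal sketch, kept as comments since the read files and memory settings here are placeholders, not values from this run:

```shell
# Hypothetical Pilon polishing pipeline (file names are assumptions):
# align Illumina reads to the Canu contigs, then let Pilon fix the consensus.
# bwa index ra_error.contigs.fasta
# bwa mem -t 16 ra_error.contigs.fasta reads_R1.fq reads_R2.fq \
#   | samtools sort -o alignments.bam -
# samtools index alignments.bam
# java -Xmx64G -jar pilon.jar --genome ra_error.contigs.fasta \
#   --frags alignments.bam --output polished
```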
Since the asm finished I'm closing this original issue. If you have another issue or an error with another run please open a new issue.
Hello,
I am using the current Linux-amd64 build. Canu keeps failing at the filterCorrectionOverlaps step with no error output, or it just stops... I have a large-memory box with 2 TB.
commands I have tried:
It's a 500 Mbp genome with 41 GB of raw PacBio data from RS II technology.
It's a polyploid plant genome; I can send more error reports. I really need to get this assembled asap! HELP....
Using java 1.8.0_31
Outputs:
command one
Command two error
STOPS
I have no problem with my bacterial genomes. Please let me know what else you need to figure this out asap. Help!