achurcher closed this issue 7 years ago
The good news is that this will be fixed in the next version.
For now, you can try adding '-raw' to the ovStoreBucketizer command in warb_assm.ovlStore.BUILDING/scripts/1-bucketize.sh. This should disable gzip compression. I say 'try' because I don't know for sure whether that option existed back in version 1.3.
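Something like this one-liner would make the edit (it keeps a .bak copy of the script; the substitution assumes the flag can go right after the program name, so glance over the result before rerunning):

    # Append '-raw' to the ovStoreBucketizer call; a .bak backup is kept.
    # Check the edited script afterward in case the name appears elsewhere.
    sed -i.bak 's/ovStoreBucketizer/ovStoreBucketizer -raw/' \
        warb_assm.ovlStore.BUILDING/scripts/1-bucketize.sh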
Canu might also rewrite the 1-bucketize.sh file on a restart (I don't think it does). If it does, you can change src/pipelines/OverlapStore.pm at around line 258 to add the option there.
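If you do end up patching the pipeline, grep will find the right spot, since the exact line number drifts between versions:

    # Locate where the module writes the ovStoreBucketizer command,
    # then add the option where the command is assembled.
    grep -n "ovStoreBucketizer" src/pipelines/OverlapStore.pm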
The final option is to use the slower, non-parallel version of this component: ovsMethod=sequential.
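A restart with that option is just your usual command plus the setting; Canu resumes from the existing assembly directory. As a sketch (the prefix, directory, and read file here are placeholders for your own values):

    # Resume the run with the sequential overlap-store builder.
    # Prefix, directory, and read file are placeholders.
    canu -p warb_assm -d warb_assm \
         genomeSize=1.2g \
         ovsMethod=sequential \
         -pacbio-raw pacbio_reads.fastq.gz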
How big is the correction/1-overlapper/results/ directory? Your free disk space makes me a little nervous.
Hi Brian,
Thanks for your help! I am running the assembly on a relatively large cluster and unfortunately lost the intermediate files/folders from the node's scratch directory when the job last failed, so I am not sure about the size of the correction/1-overlapper/results/ directory.
I have restarted the run with the 'ovsMethod=sequential' option, as you suggested, and this stage appears to have completed successfully, in a very reasonable amount of time.
Thanks again for the help : )
Allison
Hi, I am having what appears to be the same problem as in this post: https://github.com/marbl/canu/issues/136. We are trying to assemble a 1.2 Gb genome with ~66x PacBio coverage, using Canu version 1.3. We are running on a 512 GB machine with 16 cores.
I have tried increasing ovsMemory, setting the range to 5-500 (ovsMemory=5-500), but the run still fails at the same place. I have pasted the tail end of the log file below and would be very happy to hear any suggestions.
Thank you, Allison