It seems the number of consensus jobs is too large for your grid. What's in unitigging/5-consensus/consensus.jobSubmit-01.sh? Can you also post the full genome_773m_hf.report file to give more info on the assembly up to this point?
The contents of unitigging/5-consensus/consensus.jobSubmit-01.sh are as follows:
#!/bin/sh
sbatch \
--cpus-per-task=8 --mem-per-cpu=64m --partition=smp02 -o consensus.%A_%a.out \
-D `pwd` -J "cns_genome_773m_hf" \
-a 1-1000 \
`pwd`/consensus.sh 0 \
> ./consensus.jobSubmit-01.out 2>&1
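(For reference, a quick way to check whether the "-a 1-1000" array above exceeds what the scheduler allows is to query SLURM's MaxArraySize; this is a generic SLURM diagnostic sketch, not Canu output.)
#!/bin/sh
# Print the largest job-array index the cluster will accept (SLURM setting).
scontrol show config | grep -i MaxArraySize
# Compare the reported value against the "-a 1-1000" range in consensus.jobSubmit-01.sh.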
I've attached the full genome_773m_hf.report file.
genome_773m_hf.report.txt
Looking forward to your answer. Thank you very much.
From the report, you've got essentially no assembly. The data is very short (all reads are 1-3 kb); is that expected? Typically HiFi reads are 15+ kb in length. The histogram also shows a higher-than-expected error rate (2-copy k-mers), and the peak is only 12-20x coverage. I wouldn't bother finishing this assembly, as it will essentially give you back your reads. Given the low coverage and very short reads, I don't think HiCanu will work very well on this sample. You could try the suggestions from the FAQ (trimming and/or increasing correctedErrorRate to 0.025 or similar), but I don't expect you'll get much of an assembly anyway.
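(If you do want to try the FAQ suggestion anyway, a minimal sketch of the re-run is below. Only correctedErrorRate=0.025 comes from the suggestion above; the read file name, output directory, and genomeSize=773m are assumptions based on the report name.)
#!/bin/sh
# Sketch only: re-run Canu with a relaxed corrected error rate, as suggested above.
# "reads.hifi.fastq.gz" and the output directory are placeholders.
canu -p genome_773m_hf -d genome_773m_hf_cer025 \
  genomeSize=773m \
  correctedErrorRate=0.025 \
  -pacbio-hifi reads.hifi.fastq.gz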
Thank you very much for your answer!
Hello,
I tried running Canu with this command on a SLURM grid cluster, but got this "abnormal" error:
Please help. Thank you very much.