Closed mbhall88 closed 3 years ago
Hi @mbhall88! Thank you very much for the feedback, I hadn't heard of nanoq. I'll see about getting it swapped in and getting an update pushed in the next day or two.
Please don't hesitate to provide any other suggestions for improvements. Thanks again!
Replaced filtlong with nanoq in https://github.com/rpetit3/dragonflye/commit/5c5d2ac1cd2663eb336b2f6e94b114a8ace07a51. It'll be in the next version (v1.0.1).
Thanks again for the feedback, please feel free to reopen.
[dragonflye] Filter reads based on length and quality
[dragonflye] Running: nanoq --min_length 10000 --fastx READS.sub.fq.gz --detail 2>&1 1> READS.fq | sed 's/^/[nanoq] /' | tee -a dragonflye.log
[nanoq]
[nanoq] Nanoq Read Summary
[nanoq] ====================
[nanoq]
[nanoq] Number of reads: 9,545
[nanoq] Number of bases: 186,524,370
[nanoq] N50 read length: 20,856
[nanoq] Longest read: 99,830
[nanoq] Shortest read: 10,001
[nanoq] Mean read length: 19,541
[nanoq] Median read length: 17,507
[nanoq] Mean read quality: 13.54
[nanoq] Median read quality: 13.77
[nanoq]
Sorry, and one other question/comment. Would it not make more sense to filter the reads before subsampling? Otherwise, if you subsample to, say, 60x and half the reads are shorter than the minimum read length, you end up with a filtered fastq that is much less than 60x.
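The arithmetic behind this can be sketched with a toy example (the genome size and the "half the reads are too short" fraction below are hypothetical numbers for illustration, not measurements from this dataset):

```python
# Why filtering AFTER subsampling can leave you well below the target depth.
genome_size = 5_000_000   # assumed genome size in bp (hypothetical)
target_depth = 60         # target coverage (x)

# Subsample first: keep exactly 60x worth of bases.
subsampled_bases = target_depth * genome_size

# Now suppose half of those bases sit in reads shorter than the
# minimum read length, so the length filter discards them.
bases_failing_length_filter = subsampled_bases // 2

remaining_bases = subsampled_bases - bases_failing_length_filter
effective_depth = remaining_bases / genome_size
print(f"Effective depth after filtering: {effective_depth:.0f}x")  # 30x
```

Filtering first avoids this: the subsampler only ever sees reads that pass the length cutoff, so the requested depth is met (reads permitting).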
Yeah, totally agree. I think it might have been to reduce the filtlong run time, but with nanoq that won't be an issue.
I'll get them switched and have it in the next version update.
Thanks again for the great feedback!
Hi @mbhall88
I flipped the two methods, so as of v1.0.4 (https://github.com/rpetit3/dragonflye/releases/tag/v1.0.4) read length filtering is done before read depth reduction.
Thanks again for the feedback!
Hey @rpetit3 great work on this.
A suggestion that should speed things up slightly would be to switch the read length filtering from filtlong to nanoq. There is a nice benchmark on the nanoq README to back this up.