benedictpaten / marginAlign

UCSC Nanopore
MIT License

marginStats is slow #16

Closed: benedictpaten closed this issue 7 years ago

benedictpaten commented 9 years ago

The runtime to calculate performance statistics is way too slow.

JohnUrban commented 7 years ago

I am experiencing the slowness at the moment. Do you know how long it should take for 1-2 million reads?

mitenjain commented 7 years ago

It could take a couple of hours if the reads are long (~10 kb). I will work on a patch for this in a few days. Sorry for the hassle.

JohnUrban commented 7 years ago

Thank you. Actually, it has been running for 16 hours already. Is there any way to use multiple threads? I may need to devise a divide-and-conquer strategy.

JohnUrban commented 7 years ago

If I run it on 50-100 reads, it seems to suggest it will take 400 hours to finish 2 million reads. Does that sound correct?

benedictpaten commented 7 years ago

Are you doing both training and alignment on the reads?

Benedict (from phone, forgive extra typos)

JohnUrban commented 7 years ago

For the big job I am running (2 million reads), I am running marginStats (to get pct identity values) on a set of nanopore reads previously aligned by BWA (independent of marginAlign). These reads are mapped to an ~300 Mb assembly.

The 50-100 reads I used to check the speed were lambda nanopore reads previously aligned with marginAlign; those, of course, map to the 48.5 kb lambda genome.

So, to answer your question: I am past the alignment/training stages and am just trying to use marginStats.
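
For readers landing on this thread: a single marginStats invocation for percent identity, as described above, might look like the sketch below. The positional argument order (SAM, reads FASTQ, reference FASTA) and the flag names are recalled from the marginAlign README and may differ in your version, so treat them as assumptions and confirm with `marginStats --help`. File names are placeholders.

```bash
# Hypothetical single run: pull a per-read identity value from a BWA-produced SAM.
# Verify the flag names with `marginStats --help` before relying on them.
marginStats bwa_alignments.sam reads.fastq assembly.fasta \
    --readIdentity --printValuePerReadAlignment > read_identities.txt
```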

mitenjain commented 7 years ago

400 hours seems odd, especially if you are just running stats. I have done stats on ~125,000 reads in around 5.5 hours (parallelizing over 32 CPUs).

There are ways we can speed this up further. I am out of town till tomorrow, but will look into this ASAP.

JohnUrban commented 7 years ago

But are you parallelizing by splitting up the SAM files, or is there a parallelization option?

I do massively appreciate you looking into this and responding so quickly. Feel free to ignore this until you get back in town.

JohnUrban commented 7 years ago

Did you make any progress on this? Either way, can you let me know your strategy for parallelizing over 32 CPUs? Was it just breaking the input up into 32 separate BAM files and running them separately, or is it a little more sophisticated than that? There is a potential learning moment to be had here.

mitenjain commented 7 years ago

Sorry, haven't gotten to it yet.

There are two ways to parallelize. One requires using the jobTree framework (this is being upgraded now), where you can specify CPU and memory. The simplest way to do it is as you mentioned: split into multiple SAMs and run them with GNU parallel (I have done that too in the past).

I will confirm once I have looked at the stats code to see how to speed it up.
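
A minimal sketch of the second approach (split the SAM into chunks and fan them out with GNU parallel), assuming marginStats accepts a SAM, a reads FASTQ, and a reference FASTA as positional arguments; the chunk names and the stats flags are placeholders/assumptions. Note that naive line-splitting can put secondary alignments of a read into a different chunk than its primary alignment, which may or may not matter for your statistics.

```bash
#!/usr/bin/env bash
# Sketch: split the SAM body into N chunks, re-attach the header, and run one
# marginStats job per chunk (requires samtools, GNU coreutils split, GNU parallel).
set -euo pipefail

SAM=alignments.sam      # placeholder input SAM
READS=reads.fastq       # placeholder reads FASTQ
REF=assembly.fasta      # placeholder reference FASTA
JOBS=32                 # number of chunks == number of concurrent jobs

# Every chunk needs the SAM header to be a valid SAM file.
samtools view -H "$SAM" > header.sam
samtools view "$SAM" | split -n l/"$JOBS" - body_

for b in body_*; do
    cat header.sam "$b" > "chunk_$b.sam" && rm "$b"
done

# Flag names are an assumption; check `marginStats --help` for your version.
parallel -j "$JOBS" \
    "marginStats {} $READS $REF --readIdentity --printValuePerReadAlignment > {.}.identity.txt" \
    ::: chunk_*.sam
```

The per-chunk output files can then be concatenated for downstream summaries or plotting.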

mitenjain commented 7 years ago

Did that help? I am going to push a possible fix later today/early tomorrow that should help with this. Apologies for the delay.

JohnUrban commented 7 years ago

No worries. I appreciate your help. Thanks for clarifying your parallelization strategy. I will break the BAM up into a ton of smaller pieces. Also, I will wait to try your possible fix. Thanks for looking into this.

danarte commented 7 years ago

Hello, is there an option for using multiple CPUs (threads) with marginStats?

mitenjain commented 7 years ago

The easiest way to speed it up is to break the SAM files into pieces and run them in parallel. We are working on a newer design of the code that will parallelize this; it should be available very soon.
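
If you take the split-and-parallelize route sketched earlier in the thread, collecting the per-chunk results is just a concatenation, assuming each chunk run wrote one value per read alignment (file names below match the placeholders used in that sketch).

```bash
# Combine per-chunk outputs from the parallel runs into a single table.
cat chunk_*.identity.txt > all_read_identities.txt
wc -l all_read_identities.txt   # rough sanity check: about one line per read alignment
```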