PacificBiosciences / HiPhase

Small variant, structural variant, and short tandem repeat phasing tool for PacBio HiFi reads

Expected memory usage #29

Closed mrvollger closed 8 months ago

mrvollger commented 8 months ago

Hi @holtjma,

I was wondering if you could share some approximate memory usage estimates for hiphase? (and sorry if I missed it in the docs)

We're running into some surprising out-of-memory errors and wondering if we need to change resources or if there is something that could be addressed/improved.

Thanks, Mitchell

holtjma commented 8 months ago

We might not have it listed in the docs, I'll make a note to check for that in the future.

In terms of usage, I typically allocate 4 GB per thread. For example, all of the benchmarking I have set up reserves 16 threads and 64 GB of memory. I also believe these resource requirements are the same in the human WGS workflow: https://github.com/PacificBiosciences/wdl-common/blob/fef058b879d04c15c3da2626b320afdd8ace6c2e/wdl/workflows/hiphase/hiphase.wdl#L118.
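That rule of thumb (roughly 4 GB per thread) is easy to encode when sizing cluster requests. A minimal sketch; the helper name and default are illustrative, not part of HiPhase:

```python
def hiphase_mem_gb(threads: int, gb_per_thread: int = 4) -> int:
    """Rule-of-thumb memory request for a HiPhase job: ~4 GB per thread.

    Note: gb_per_thread=4 is the heuristic from this thread, not a
    guarantee; unusual samples may need more (see the cases below).
    """
    return threads * gb_per_thread

# Matches the benchmarking setup described above: 16 threads -> 64 GB.
print(hiphase_mem_gb(16))
```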

With that said, if you have an unusual sample, it's possible that you may need more memory. "Unusual" in this context usually means it has a much higher NG50 than we expect in a normal sample. Some examples where we've seen this issue pop up:

  1. Extremely high heterozygosity - I think we semi-recently had a sample with a much higher het/hom ratio than expected, leading to significantly longer phase blocks (the NG50 was ~6 Mb in the end). IIRC, we had to bump the memory requirement to get that sample through.
  2. Non-HiFi data - One example was someone trying to run HiPhase on ONT datasets. Disregarding accuracy concerns, this will create larger putative phase blocks, leading to higher peak memory usage per phase block.
  3. Usage error - For example, running global re-alignment without SVs will almost certainly be problematic.

Does the dataset you're using fall into one of the above categories? If not, is there anything else unusual about the dataset that you think could be influencing it? If the data is publicly available, or you're willing to share, I can also take a look to see if there's anything odd.

mrvollger commented 8 months ago

This is very useful information, thanks! I don't think our data falls into one of those categories, but I also think we have been requesting a little less memory than that on average.

Unfortunately, it is protected data, so I cannot share, but if I run into it with open data I will be sure to share.

Please feel free to close, and I will come back if I can find some open data that gives me trouble.

holtjma commented 8 months ago

Sure thing! FWIW, there are a couple of places where we could likely tighten up memory; it just hasn't been a priority given the current state. If you're finding that memory consumption is burdensome, we could look into re-prioritizing that.