Hi @holtjma,

I was wondering if you could share some approximate memory-usage estimates for HiPhase? (Sorry if I missed them in the docs.)

We're running into some surprising out-of-memory errors and are wondering whether we need to adjust our resource requests or whether there is something that could be addressed or improved.

Thanks, Mitchell
We might not have it listed in the docs; I'll make a note to check on that in the future.
In terms of usage, I typically allocate 4 GB per thread. For example, all of the benchmarking I have set up reserves 16 threads and 64 GB of memory. I believe the resource requirements are the same in the human WGS workflow: https://github.com/PacificBiosciences/wdl-common/blob/fef058b879d04c15c3da2626b320afdd8ace6c2e/wdl/workflows/hiphase/hiphase.wdl#L118.
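For reference, here is a minimal sketch of how that rule of thumb could be encoded in a WDL task, in the style of the workflow linked above. This is illustrative only: the default thread count and the derived memory value follow the 4 GB/thread guidance from this thread, the index files that would normally accompany the BAM/VCF are omitted for brevity, and the `hiphase` flags are written from memory of the CLI, so check them against the HiPhase documentation.

```wdl
version 1.0

task run_hiphase {
  input {
    File bam        # aligned HiFi reads (index assumed to be localized alongside)
    File vcf        # small-variant calls to phase (index assumed alongside)
    File reference  # FASTA the reads were aligned to
    Int threads = 16
  }

  # Rule of thumb from this thread: ~4 GB of memory per thread,
  # so the default of 16 threads requests 64 GB.
  Int mem_gb = threads * 4

  command <<<
    hiphase \
      --threads ~{threads} \
      --bam ~{bam} \
      --vcf ~{vcf} \
      --reference ~{reference} \
      --output-vcf phased.vcf.gz
  >>>

  output {
    File phased_vcf = "phased.vcf.gz"
  }

  runtime {
    cpu: threads
    memory: "~{mem_gb} GB"
  }
}
```

Deriving the memory request from the thread count keeps the two in sync if you later raise the thread count for a larger run.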
With that said, if you have an unusual sample, you may need more memory. "Unusual" in this context usually means the sample has a much higher NG50 than we expect in a normal sample. Some examples where we've seen this issue pop up:
Does the dataset you're using fall into one of the above categories? If not, is there anything else unusual about the dataset that could be driving the memory usage? If the data is publicly available, or you're willing to share, I can also take a look to see if there's anything odd.
This is very useful information, thanks! I don't think our data falls into any of those categories, but I also think we have been requesting a little less memory than that on average.
Unfortunately, it is protected data, so I cannot share it, but if I run into the same issue with open data I will be sure to pass it along.
Please feel free to close, and I will come back if I can find some open data that gives me trouble.
Sure thing! FWIW, there are a couple of places where we could likely tighten up memory usage; it just hasn't been a priority given the current state. If you're finding the memory consumption burdensome, we could look into re-prioritizing that.