Closed: cmdoret closed this issue 5 years ago
Hey,
Thanks again for the pull request!
I'm not really sure this is normal. However, I'm not the one who developed fpa. Have you tried splitting your library into bigger chunks, just to see how that impacts the memory consumption?
I'll have a chat with the fpa developer and keep you updated.
Cheers, Pierre
Thanks, I only tried with 2 and 4 chunks; 2 was still too heavy (somewhere above 200 GB). For the record, I have a small-ish genome (~50 Mb) with high coverage (190x); perhaps this requires more RAM (more pairwise alignments) than a library of identical size on a larger genome? In the end it worked out fine with 4 chunks, so no worries :)
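For anyone hitting the same wall, here is a minimal sketch of one way to split a read library into round-robin chunks before correction; the chunk count (4) and file names are placeholders, and it assumes a plain, uncompressed 4-line-per-record FASTQ:

```sh
# Sketch only: split reads.fastq into 4 round-robin chunks
# (chunk_0.fastq .. chunk_3.fastq). Adjust n and file names as needed.
awk -v n=4 '{chunk = int((NR - 1) / 4) % n; print > ("chunk_" chunk ".fastq")}' reads.fastq
```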
Yeah, but doing so reduces the coverage that is actually used for correcting each read. And the deeper the coverage, the better the correction... So this is actually kind of a big issue, thanks for pointing it out. :)
Just thinking about that too: did you modify the -I parameter of Minimap2? That's the parameter that splits the read index every N bases during the pairwise alignments. If you raised it a bit too much, maybe that could be the reason.
Nope, I kept the default 500M value. Thanks for the answer though!
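For context, a hedged sketch of what that parameter looks like in a standalone minimap2 all-vs-all run (the preset, file names, and output path are illustrative; CONSENT sets up its own minimap2 call internally):

```sh
# Illustrative only: -I caps how many target bases minimap2 loads into the
# index per batch, so a smaller value trades peak RAM for more index batches.
# ava-ont is minimap2's all-vs-all overlap preset for Nanopore reads.
minimap2 -x ava-ont -I 500M reads.fastq reads.fastq > overlaps.paf
```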
Hey,
Not using fpa anymore, so this issue should no longer be one. Closing it.
Cheers, P
Hello,
I am using CONSENT-correct on my Nanopore reads. I have a library of 800k reads, and the RAM consumption of the fpa index process gets extremely high (just under 350 GB) when I run CONSENT-correct on it. However, if I split my library into 4 subsets, I get RAM usage below 20 GB. Is this normal?