brettChapman opened this issue 1 year ago
I saw this in the wiki: https://github.com/ComparativeGenomicsToolkit/cactus/blob/master/doc/progressive.md#running-on-a-cluster
Adding `--consMemory 126G` didn't work; I get an error saying `--consMemory` is not recognised.
Disregard, I've realised `--consMemory` is for cactus-align only. Will try again and update.
Yeah, the `--consMemory` option will fix it.
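For anyone hitting the same error: a minimal sketch of where the flag goes. This is a command-line fragment for illustration only; the jobstore and file paths are placeholders, not from this thread, and the positional arguments assume the usual cactus-align invocation.

```shell
# Illustrative only: jobstore and file names below are made up.
# --consMemory is accepted by cactus-align, not by the top-level
# progressive `cactus` command, and caps the memory Toil requests
# for the cactus_consolidated job.
cactus-align ./jobstore ./seqFile.txt ./alignments.paf ./out.hal \
    --consMemory 126G
```

On Slurm this matters because a job that exceeds its requested memory is evicted rather than allowed to finish.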
The issue was that cactus often used to request less memory from Toil jobs than it actually used. This was fine most of the time, though it could certainly result in crashes when you ran out of memory.
But on Slurm (at least on our cluster), going over the requested memory means instant eviction. That meant I had to go into each job and make its memory estimate much more conservative. For jobs that don't use much memory, or whose usage is a simple function of the input size, it wasn't a big deal. But cactus_consolidated is really hard to predict, and for now the estimate errs on the side of being too conservative. I do hope to improve it going forward.
On a semi-related note, you can add memory usage to your cactus logs by setting `export CACTUS_LOG_MEMORY=1`.
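A quick sketch of how that environment variable would be set before a run (the follow-on cactus command is omitted; only the export itself comes from this thread):

```shell
# Enable per-job memory reporting in the cactus logs.
export CACTUS_LOG_MEMORY=1
# Any cactus / cactus-align run started from this shell will now
# include memory-usage lines in its log output, which helps when
# tuning values for flags like --consMemory.
```

This is handy for working out how much headroom a job actually needs before pinning its request down on a Slurm cluster.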
Hi, I've recently updated to the latest version (v2.6.5) so I could use ODGI and the inbuilt visualisations, and I'm now getting errors about system memory that I didn't see before. I could complete the same job with v2.5.4.

Thanks.