Closed by ap790 15 hours ago
So the log reads:
mimalloc: error: unable to allocate OS memory (67108864 bytes, error code: 12 [Cannot allocate memory], address: (nil), large only: 0, allow large: 1)
mimalloc: process info:
elapsed: 77602.980 s
process: user: 76609.571 s, system: 708.610 s, faults: 388381
rss: current: 300456214528, peak: 268354465792
commit: current: 300456214528, peak: 300456214528
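To connect these raw byte counts with the "stops near 250 GB" symptom described below, here is a small Python sketch (values copied from the log above) that converts them to GiB. Note the failed allocation itself is only 64 MiB; it is the last straw on top of a ~250 GiB peak RSS and ~280 GiB commit.

```python
# Convert the byte counts from the mimalloc error log into GiB.
GIB = 1024 ** 3

failed_alloc = 67_108_864        # the single allocation that failed
rss_peak     = 268_354_465_792   # "rss: peak" from the log
commit_peak  = 300_456_214_528   # "commit: peak" from the log

print(f"failed allocation: {failed_alloc / GIB:.2f} GiB")  # 64 MiB
print(f"peak RSS:          {rss_peak / GIB:.2f} GiB")      # ~249.9 GiB
print(f"peak commit:       {commit_peak / GIB:.2f} GiB")   # ~279.8 GiB
```

So the process died while asking for one more modest 64 MiB chunk, right when its resident set was approaching 250 GiB — consistent with a per-process cap well below the machine's 1.6 TB.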
This means that your OS failed to fulfill SPAdes' request for memory allocation. This is outside of SPAdes' control and there is no workaround.
Practice shows that these large-memory HPC systems are very often misconfigured: they do not allow you to allocate all the memory they have; effectively, only part of it is available to a single application.
You can check https://github.com/ablab/spades/issues/871 for some discussion of system parameters that you (or your system administrator) might want to tune.
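As a starting point for that tuning, here is a hedged diagnostic sketch (my own illustration, not part of SPAdes) that prints the per-process resource limits and the kernel overcommit policy that most commonly cap allocations on HPC nodes. It assumes a POSIX system with the standard Python `resource` module; the `/proc` path is Linux-only.

```python
import resource

GIB = 1024 ** 3

def gather_limits():
    """Collect per-process memory limits that commonly cap allocations
    on misconfigured HPC nodes (standard POSIX/Linux resource limits)."""
    limits = {}
    for name in ("RLIMIT_AS", "RLIMIT_DATA", "RLIMIT_RSS"):
        rlim = getattr(resource, name, None)  # not every limit exists on every OS
        if rlim is not None:
            limits[name] = resource.getrlimit(rlim)  # (soft, hard)
    return limits

def fmt(value):
    return "unlimited" if value == resource.RLIM_INFINITY else f"{value / GIB:.1f} GiB"

for name, (soft, hard) in gather_limits().items():
    print(f"{name}: soft={fmt(soft)}, hard={fmt(hard)}")

# vm.overcommit_memory = 2 ("never overcommit") often yields ENOMEM well
# below physical RAM; this file exists only on Linux.
try:
    with open("/proc/sys/vm/overcommit_memory") as f:
        print("vm.overcommit_memory:", f.read().strip())
except OSError:
    print("vm.overcommit_memory: not readable on this system")
```

If any of these report a soft or hard limit around 250 GiB, that would match the failure pattern in this issue; raising it is a job-scheduler or administrator setting (e.g. `ulimit`, cgroup limits), not a SPAdes option.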
Description of bug
I have been trying to run metaSPAdes on an HPC. Despite allocating 1.6 TB of memory and using the -m 1600 and -t 40 flags, the SPAdes process consistently stops on its own when memory usage approaches 250 GB. Continuing the run results in the same error. I would appreciate any suggestions or solutions to resolve this problem.
spades.log
params.txt
SPAdes version
SPAdes 4.0.0
Operating System
macOS 14.5 (23F79)
Python Version
Python 3.9.18
Method of SPAdes installation
conda
No errors reported in spades.log