VGalata opened this issue 2 years ago
The setup step worked with 12 cores and 48 GB. Here is the Slurm output for the job:
Job ID: 2564274
Cluster: iris
User/Group: vgalata/clusterusers
State: TIMEOUT (exit code 0)
Nodes: 1
Cores per node: 12
CPU Utilized: 06:37:48
CPU Efficiency: 55.13% of 12:01:36 core-walltime
Job Wall-clock time: 01:00:08
Memory Utilized: 23.96 GB
Memory Efficiency: 49.92% of 48.00 GB
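(For reference, the seff percentages above are just used-over-allocated ratios across the wall time; a quick sketch of the arithmetic, using the 12-core allocation reported above:)

```python
# Quick check of the seff figures above (not part of the job itself).
def hms_to_s(t: str) -> int:
    h, m, s = map(int, t.split(":"))
    return h * 3600 + m * 60 + s

wall = hms_to_s("01:00:08")        # Job Wall-clock time
core_walltime = 12 * wall          # 12 cores * wall = 43296 s = 12:01:36
cpu_used = hms_to_s("06:37:48")    # CPU Utilized

print(f"CPU efficiency: {cpu_used / core_walltime:.2%}")  # 55.13%
print(f"Memory efficiency: {23.96 / 48.00:.2%}")          # 49.92%
```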
The actual runtime:
This Assembler process finished running at 2021-12-07 17:15:13 and took 3400 seconds to complete.
The log output was almost the same as before; here is the part that was missing from the previous run because of the crash:
Metadata will be extracted with 12 workers!
Concatenating files into /mnt/irisgpfs/users/vgalata/projects/imp3/submodules/mantis/References/NOG/NOGG/metadata.tsv
# Will now split data into chunks!
Checking which HMMs need to be split, this may take a while...
Will split: []
Database will be split with 0 workers!
Checking which custom hmms need to be pressed
HMMs will be pressed with 0 workers!
Preparing NLP Resources!
# Finished setting up databases!
# This Assembler process finished running at 2021-12-07 17:15:13 and took 3400 seconds to complete.
##########################################################################################################################
# Thank you for using Mantis, please make sure you cite the respective paper https://doi.org/10.1093/gigascience/giab042 #
##########################################################################################################################
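(Side note on the pressing step above: hmmpress is HMMER's indexer for .hmm files, and the usual way to parallelize it is a bounded worker pool. The sketch below is not Mantis's code, just the general pattern; the paths and the worker cap of 4 are made up for illustration, and HMMER's hmmpress is assumed to be on PATH.)

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def press(hmm_file: Path) -> int:
    # hmmpress -f (re)builds the binary .h3* index files next to the .hmm
    return subprocess.run(["hmmpress", "-f", str(hmm_file)]).returncode

# Hypothetical location of the HMM files; adjust to your setup.
hmm_files = sorted(Path("References").rglob("*.hmm"))

# Cap the pool explicitly so the step cannot oversubscribe the allocation.
with ProcessPoolExecutor(max_workers=4) as pool:
    exit_codes = list(pool.map(press, hmm_files))
```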
Thank you @VGalata. I will look into this. Regards, Pedro
Mantis seems to spawn more processes than there are available cores and to run out of memory during the setup step. I ran it with 5 cores and 20 GB, and during the metadata extraction with 5 workers the job spawned even more sub-processes and crashed. The error message from the Slurm job was "out of memory". I will try to run the job with 12 cores and 48 GB to see whether the memory issue appears again.
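If the extra processes come from core auto-detection, one plausible culprit is Python reporting all cores on the node instead of the Slurm allocation. Whether Mantis actually sizes its pools from cpu_count() is my assumption, not something I checked in its code; the sketch below just shows the difference:

```python
import multiprocessing
import os

# cpu_count() reports every core on the node, ignoring the cgroup limits
# Slurm enforces on the job, so pools sized from it can oversubscribe.
detected = multiprocessing.cpu_count()

# sched_getaffinity(0) reports only the cores actually allocated to this
# job, which is the safer basis for a worker pool under Slurm (Linux-only).
allocated = len(os.sched_getaffinity(0))

print(f"cpu_count(): {detected}, allocated to this job: {allocated}")
```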
Used version: 14f75ac
CMD:
Config:
Conda YAML:
Log file:
Screenshots: