Closed: huguesfontenelle closed this issue 2 years ago.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
bump
A potential solution would be to add an executor config option, e.g. `memPerCpu` (similar to the LSF-specific options), that controls whether the SLURM executor uses `--mem` or `--mem-per-cpu` for the `memory` directive. PRs are welcome.
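If such an option existed, it could presumably be enabled in the executor scope of the Nextflow config. A minimal sketch, assuming the proposed name `memPerCpu` were adopted as a boolean switch (it is not an existing Nextflow setting):

```groovy
// Hypothetical config: tell the SLURM executor to emit --mem-per-cpu
// instead of --mem when translating the memory directive.
// The option name is the one proposed above, not a released setting.
executor {
    $slurm {
        memPerCpu = true
    }
}
```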
This issue is also relevant for me. Following your suggestion @bentsherman and looking at the code for the SLURM executor, adding this option with a simple if-else clause depending on a `memPerCpu` flag seems straightforward (started here). What else would be required for a PR? I guess adding a test to check that the directive successfully translates to an `sbatch` directive, plus some documentation.
Is anyone aware of parallel work on this topic? Otherwise I'll go ahead and open a PR.
Feel free to submit a PR with your contributions 😄
And yes, I would also like to see a unit test and documentation for the option. You also need to fetch the option from the executor config, see `perJobMemLimit` in the LSF executor:

```groovy
perJobMemLimit = session.getExecConfigProp(name, 'perJobMemLimit', perJobMemLimit)
```
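The corresponding change for the SLURM executor might look like the sketch below. The option name `memPerCpu`, its placement, and the `memoryValue` placeholder are assumptions based on the proposal in this thread, not existing Nextflow code:

```groovy
// Hypothetical sketch, mirroring the LSF executor's perJobMemLimit above:
// fetch the proposed option from the executor config scope.
memPerCpu = session.getExecConfigProp(name, 'memPerCpu', memPerCpu)

// ...then, wherever the memory directive is rendered into sbatch options
// (memoryValue stands in for the formatted memory amount):
if( memPerCpu )
    result << '--mem-per-cpu' << memoryValue   // per-CPU request
else
    result << '--mem' << memoryValue           // per-node request (current behaviour)
```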
Is there any movement on this?
New feature
The HPC that I am using has a SLURM workload manager that does not support Nextflow's `memory` directive; instead, memory per CPU must be specified in the sbatch script.
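For illustration, such a system expects a per-CPU memory request in the job script header along these lines (the account name and value are illustrative, taken from the workaround below):

```shell
#!/bin/bash
#SBATCH --account=hugues
#SBATCH --mem-per-cpu=2000M
```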
As a workaround, I simply use `clusterOptions` in my process config. But then that memory-per-CPU value is used for all processes. I could move it out of the global config and write it in each process, but then I must repeat the account name.
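To make the repetition concrete, a hypothetical per-process config (the process names are invented for illustration) would have to duplicate the account in every selector:

```groovy
// Each process needs its own memory-per-CPU value, but the account
// string must be copied into every clusterOptions entry as well.
process {
    withName: 'align' {
        clusterOptions = '--account=hugues --mem-per-cpu=2000M'
    }
    withName: 'sort' {
        clusterOptions = '--account=hugues --mem-per-cpu=4000M'
    }
}
```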
Usage scenario
For such SLURM systems, instead of using … together with

```groovy
clusterOptions = '--account=hugues --mem-per-cpu=2000M'
```

I would have …

FYI, documentation for my HPC: https://www.uio.no/english/services/it/research/sensitive-data/use-tsd/hpc/job-scripts.html