holtgrewe opened this issue 2 years ago
This also leads to `mem_per_thread` not being settable.
```
sbatch: fatal: --mem, --mem-per-cpu, and --mem-per-gpu are mutually exclusive.
Traceback (most recent call last):
  File "/etc/xdg/snakemake/cubi-v1/slurm-submit.py", line 82, in <module>
    jobid = slurm_utils.submit_job(jobscript, **sbatch_options)
  File "/etc/xdg/snakemake/cubi-v1/slurm_utils.py", line 182, in submit_job
    raise e
  File "/etc/xdg/snakemake/cubi-v1/slurm_utils.py", line 180, in submit_job
    res = sp.check_output(cmd)
  File "/data/gpfs-1/users/euskircp_c/work/miniconda/lib/python3.7/subprocess.py", line 411, in check_output
    **kwargs).stdout
  File "/data/gpfs-1/users/euskircp_c/work/miniconda/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['sbatch', '--parsable', '--cluster=cubi', '--time=02:00:00', '--mem=5532', '--mem-per-cpu=6G', '--cpus-per-task=12', ...
```
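For reference, one could work around the conflict just before submission by keeping only one of the mutually exclusive memory flags. Below is a minimal sketch, not part of the profile's slurm_utils.py; the function name and the policy that per-CPU/per-GPU memory wins are my own assumptions:

```python
# Hypothetical pre-submission guard; not the profile's actual code.
# Policy assumption: --mem-per-cpu/--mem-per-gpu win over --mem.
def resolve_memory_conflict(sbatch_options):
    """sbatch rejects --mem combined with --mem-per-cpu or --mem-per-gpu,
    so drop --mem whenever a per-CPU/per-GPU memory flag is present."""
    options = dict(sbatch_options)
    if "mem-per-cpu" in options or "mem-per-gpu" in options:
        options.pop("mem", None)
    return options

# The flag combination from the traceback above:
opts = {"time": "02:00:00", "mem": "5532", "mem-per-cpu": "6G", "cpus-per-task": "12"}
print(resolve_memory_conflict(opts))
# -> {'time': '02:00:00', 'mem-per-cpu': '6G', 'cpus-per-task': '12'}
```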
I can reproduce this using snakemake>=7.0. The proper place to apply a patch, I would guess, is here.

My solution was to change the order of priority here and therefore run `convert_job_properties()` before applying any defaults from my cluster_config.json. When using the HPC profile I don't really want the resources from the Snakefile anyway, since they would be tuned for local execution and not for SLURM, but I understand the order is based on preference. A sketch of the reordering follows below.

Edit for clarity: I mean that I don't want any resources from the Snakefile to take priority over those in `__defaults__` if there is a name clash.
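To illustrate the priority swap, here is a minimal sketch. The `convert_job_properties` below is only a stand-in stub for the helper of the same name in slurm_utils.py; the point is solely the call order:

```python
# Sketch of the priority swap described above; not the profile's actual code.
def convert_job_properties(job_properties):
    # Stand-in for slurm_utils.convert_job_properties(): map Snakefile
    # resources to sbatch options.
    return dict(job_properties.get("resources", {}))

def build_sbatch_options(job_properties, cluster_defaults):
    options = convert_job_properties(job_properties)  # Snakefile resources first ...
    options.update(cluster_defaults)  # ... then __defaults__ wins on a name clash
    return options

# With mem=30000 in the Snakefile and mem="8G" in the cluster_config.json
# defaults (both values made up for the example):
print(build_sbatch_options({"resources": {"mem": 30000}}, {"mem": "8G"}))
# -> {'mem': '8G'}
```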
I am also encountering inconsistent resource mapping in combination with Snakemake 7.6.2. Setting

```
resources:
    mem_mb = 30000
```

works, with `scontrol show jobid` indicating 30000M of memory. But `mem = 30000` or `mem = "30G"` only yields the default 1000M.
Apparently, Snakemake >=5.6.0 defines `mem_mb` by default, cf. the `--default-resources` documentation. At least that standard value for `mem_mb` takes higher precedence than `mem` when using the Snakemake profile. E.g., setting only `mem = "30G"` will lead to the `mem_mb=1000` from the defaults being used.
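One way to make `mem` and `mem_mb` behave consistently would be to normalize both to megabytes before any defaults are merged in. A rough sketch, assuming simple `K/M/G/T` suffixes; the parsing rules here are my assumption, not what the profile actually implements:

```python
import re

# Assumed unit table: plain numbers and "M" are megabytes.
_UNITS_MB = {"K": 1 / 1024, "M": 1, "G": 1024, "T": 1024 ** 2}

def normalize_mem_mb(resources):
    """Collapse mem/mem_mb into a single value in MB, letting an explicit
    mem override a (possibly default-injected) mem_mb."""
    mem = resources.get("mem")
    if mem is None:
        return resources.get("mem_mb")
    match = re.fullmatch(r"(\d+)\s*([KMGT]?)", str(mem))
    if match is None:
        raise ValueError("unparseable mem value: %r" % (mem,))
    value, unit = match.groups()
    return int(int(value) * _UNITS_MB[unit or "M"])

print(normalize_mem_mb({"mem": "30G", "mem_mb": 1000}))  # 30720, mem wins
print(normalize_mem_mb({"mem_mb": 30000}))               # 30000
```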