Thanks @lsantuari for reporting. I could confirm this issue on two HPC clusters (with Slurm 19.05.0 and 19.05.5). In fact, Xenon has not been tested against the latest Slurm releases (only up to v17).
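For reference, the Slurm version on a cluster can be checked with, e.g. (assuming sinfo is available on the login node), which should print something like:
$ sinfo --version
slurm 19.05.5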
$ xenon --version
Xenon CLI v3.0.4, Xenon library v3.0.4, Xenon cloud library v3.0.2
$ snakemake -C echo_run=0 mode=p enable_callers="['manta','delly','lumpy','gridss']" --use-conda --latency-wait 30 --jobs 14 \
--cluster 'xenon -vvv scheduler slurm --location local:// submit --name smk.{rule} --inherit-env --cores-per-task {threads} --max-run-time 5 --max-memory {resources.mem_mb} --working-directory . --stderr stderr-%j.log --stdout stdout-%j.log' &>smk.log&
$ cat smk.log
...
slurm adaptor: Got invalid key/value pair in output: Cgroup Support Configuration:
Error submitting jobscript (exit code 1):
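The non key/value line presumably comes from the output of scontrol show config, which the Xenon Slurm adaptor appears to parse (an assumption on my part, not verified against the adaptor source). Whether a cluster emits that header can be checked with:
$ scontrol show config | grep -n 'Cgroup Support Configuration'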
For the time being, you could use an sbatch command instead of xenon for the snakemake --cluster arg:
sbatch -J smk.{rule} -n {threads} --mem={resources.mem_mb} -t 5 -D . --error=stderr-%j.log --output=stdout-%j.log
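That is, the full invocation would look something like the following (same options as the example above, with only the --cluster command swapped; the config values and job count are just those from the example):
$ snakemake -C echo_run=0 mode=p enable_callers="['manta','delly','lumpy','gridss']" --use-conda --latency-wait 30 --jobs 14 \
--cluster 'sbatch -J smk.{rule} -n {threads} --mem={resources.mem_mb} -t 5 -D . --error=stderr-%j.log --output=stdout-%j.log' &>smk.log&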
Xenon CLI v3.0.5 fixes this issue. Include this fix in the next workflow release.
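After upgrading, the fix can be verified with the same version check as above; the CLI should now report v3.0.5:
$ xenon --version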
I am getting the following error:
slurm adaptor: Got invalid key/value pair in output: Cgroup Support Configuration:
when running sv-callers with Slurm on the HPC, as reported here.