snakemake / snakemake

This is the development home of the workflow management system Snakemake. For general information, see
https://snakemake.github.io

Snakemake 8.9.0 on a singularity slurm cluster combination error #2808

Closed: tinu-t closed this issue 4 months ago

tinu-t commented 6 months ago

I am trying to run Snakemake 8.9.0 with Singularity on a Slurm cluster. I constantly get the error below and am not sure what is wrong with the syntax.

Command:

snakemake --cores 3 --configfile config.yaml --executor cluster-generic --cluster-generic-submit-cmd 'sbatch -t {params.res_time} --cpus-per-task {threads} --mem-per-cpu {params.res_mem} -o {params.lsf_log}' -j 20 -k -p --notemp --latency-wait 24 --rerun-incomplete --use-singularity --singularity-args '--bind /cluster/snakemake-path'

Error: __main__.py: error: argument --apptainer-args/--singularity-args: expected one argument

tinu-t commented 5 months ago

The Apptainer parameters had to be written in a YAML file and passed via the --profile argument, as shown below:

snakemake --cores 3 --configfile config.yaml --executor cluster-generic --cluster-generic-submit-cmd 'sbatch -t {params.res_time} --cpus-per-task {threads} --mem-per-cpu {params.res_mem} -o {params.lsf_log}' -j 20 -k -p --notemp --latency-wait 24 --rerun-incomplete --profile ./profile/apptainer

cat ./profile/apptainer/config.v8+.yaml

use-singularity: True
singularity-args: "\"--bind /cluster/snakemake/\""

This worked for me
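
The extra escaped quotes presumably work because they stop the value from beginning with a literal --, which Snakemake's option parser (argparse) would otherwise read as a separate flag. A sketch of an equivalent workaround directly on the command line, using the = form so the value stays attached to its option; the bind path is the one from the profile above and the rest of the command is elided (untested here):

snakemake ... --use-singularity --singularity-args='--bind /cluster/snakemake/'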

amadeovezz commented 5 months ago

I am also running into this error while trying to use --singularity-args with slurm:

__main__.py: error: argument --apptainer-args/--singularity-args: expected one argument

The above solution did not work for me, and I've also tried passing it on the command line: --use-singularity --singularity-args "--bind /some/path"

I'm not sure if this is an issue with the snakemake-executor-plugin-slurm or with snakemake itself. I noticed this regression after I upgraded from 8.4.2.

Additional details:

Config

executor: slurm

# Slurm specific
default-resources:
  slurm_account: "some_account"
  slurm_partition: "some_partition,common"
  runtime: 60 
  mem_mb: 4000
  nodes: 1
  # Non-standard resource specifications
  slurm_extra: "'--exclude=some_node'"

jobs: 1 
printshellcmds: True
restart-times: 0 

use-singularity: True
singularity-args: "--bind some/path"
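
If the parser is tripping over the leading -- in that last value, a commonly suggested workaround is to start the value with a space so the token no longer looks like an option on its own. A sketch of the two relevant profile lines, keeping the placeholder path from above (unverified; whether the value then reaches the slurm executor's job submission untouched is a separate question):

use-singularity: True
singularity-args: " --bind some/path"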

Minimal snakefile

rule all:
    output:
        "test_output.txt"
    singularity:
        "/path/to/v4/RSingleCell.sif" 
    shell:
        "ls /mnt > {output}"

Command

snakemake --profile ./profiles/generic/

My current versions:

snakemake                                8.10.7
snakemake-executor-plugin-slurm          0.4.4
snakemake-executor-plugin-slurm-jobstep  0.2.1
snakemake-interface-common               1.17.2
snakemake-interface-executor-plugins     9.1.1
snakemake-interface-report-plugins       1.0.0
snakemake-interface-storage-plugins      3.2.0

Any help is appreciated 🙏

weber8thomas commented 5 months ago

Same issue with the following command on snakemake 8.10.7:

snakemake --sdm conda apptainer --conda-frontend mamba --forceall -j1 --executor slurm --set-resources preprocess_data:constraint="rome" --apptainer-args "--bind /g:/g"

mhjiang97 commented 5 months ago

Same issue for me using Snakemake 8.10.7 and --executor cluster-generic.

KatharinaHoff commented 5 months ago

Same issue with snakemake version 8.11.0 and --executor slurm. However, the solution suggested by @tinu-t worked for me. It would still be great if this could be fixed because it's a bummer when sharing workflows with other users who need different bindings.

Benkendorfer commented 4 months ago

I'm also having this problem in version 8.11.6, but the solution suggested by @tinu-t does not work for me. When I pass

singularity-args: "\"--bind /sdf\""

in my profile, I get a crash with the error:

SLURM job submission failed. The error message was sbatch: error: Script arguments not permitted with --wrap option
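
If the escaped quotes from the profile survive into the generated sbatch call, they would appear as an extra token after the wrapped command, and sbatch refuses any positional argument when --wrap is used. A sketch that should reproduce the same complaint outside Snakemake, with a placeholder wrapped command used purely to illustrate the quoting:

sbatch --wrap "echo test" '"--bind /sdf"'

That would suggest the nested-quote trick satisfies Snakemake's option parser but then leaks into the slurm executor's submission line.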

KatharinaHoff commented 4 months ago

Thank you @johanneskoester!