snakemake / snakemake

This is the development home of the workflow management system Snakemake. For general information, see
https://snakemake.github.io

`--cluster-status` not compatible with `--slurm`? #2477

Closed AlcaArctica closed 6 months ago

AlcaArctica commented 1 year ago

I have a Snakemake pipeline which I run with the --slurm flag, and everything works fine. However, I was unable to include a cluster status script with --cluster-status. Although the pipeline does not produce any errors when using --cluster-status, the cluster status script does not produce any output. The same script IS functional when I run the pipeline without the --slurm flag. Unfortunately, that is not an option, as I then get a "some output files are missing" error for some rules. This never happens when the --slurm flag is included.

So here is the short form:

snakemake -s ccs_new.smk --profile simple/ --cluster-status ./extras/status-sacct.py
Pro: the status script works.
Contra: a MissingOutputException error screws up the whole pipeline.

snakemake -s ccs_new.smk --profile simple/ --cluster-status ./extras/status-sacct.py --slurm
Pro: the pipeline finishes without error.
Contra: the status script does not produce any output.

What is going on?
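For context, a --cluster-status script is called by Snakemake with the submitted job ID as its only argument and is expected to print exactly one of "success", "failed", or "running" to stdout. A minimal sketch of such a script is below; this is not the status-sacct.py used in the profile above, and the sacct invocation and the set of failure states are assumptions:

```python
#!/usr/bin/env python3
"""Hedged sketch of a Slurm --cluster-status script (not the original
status-sacct.py). Snakemake passes the job ID as argv[1] and reads one
of "success", "failed", or "running" from stdout."""
import subprocess
import sys

# Slurm states treated as terminal failures (an assumed, non-exhaustive list).
FAILED_STATES = {"FAILED", "TIMEOUT", "CANCELLED", "OUT_OF_MEMORY",
                 "NODE_FAIL", "BOOT_FAIL"}


def classify(state: str) -> str:
    """Map a Slurm job state string (as reported by sacct) to a Snakemake status."""
    state = state.strip().upper()
    if not state:
        # No record yet, e.g. the job is still pending in the queue.
        return "running"
    if state == "COMPLETED":
        return "success"
    # sacct may report e.g. "CANCELLED by 12345"; compare only the first word.
    if state.split()[0] in FAILED_STATES:
        return "failed"
    return "running"


if __name__ == "__main__":
    jobid = sys.argv[1]
    out = subprocess.run(
        ["sacct", "-j", jobid, "--format=State", "--noheader", "--parsable2"],
        capture_output=True, text=True,
    ).stdout
    # The first line is the main job record; batch/extern steps follow it.
    lines = out.splitlines()
    print(classify(lines[0] if lines else ""))
```

The script must be executable (`chmod +x`) for Snakemake to be able to call it.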

For more information, here is my profile config:

# from https://github.com/jdblischak/smk-simple-slurm
cluster:
  mkdir -p logs &&
  sbatch
    --partition={resources.partition}
    --cpus-per-task={threads}
    --mem={resources.mem_mb}
    --time={resources.time}
    --job-name={rule}
    --output=logs/{rule}.out
    --error=logs/{rule}.err
    --parsable 

default-resources:
  - partition=batch 
  - mem_mb=1000 
  - time="01:00:00" 
  - nodes=1 

restart-times: 0
max-jobs-per-second: 10 
max-status-checks-per-second: 1 
local-cores: 1
latency-wait: 60
jobs: 400 
keep-going: True
rerun-incomplete: True
printshellcmds: True
scheduler: greedy
use-conda: True

I guess I would also appreciate an explanation of what exactly the --slurm flag does, because in both cases (even without the flag) jobs are submitted to the cluster. I have just started using Snakemake in combination with Slurm, so perhaps I have a misunderstanding here.

cmeesters commented 7 months ago

Sorry, I am just screening older issues ...

When using the Slurm executor, which is available via bioconda, there is no need to specify --cluster-status at all: the executor queries job status itself.
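For reference, a rough sketch of using the executor mentioned above, assuming Snakemake >= 8 with the separate `snakemake-executor-plugin-slurm` package (exact flags and resource names may differ between versions):

```shell
# Install the Slurm executor plugin (also packaged on bioconda).
pip install snakemake-executor-plugin-slurm

# The executor submits jobs and polls their status itself,
# so no --cluster-status script is needed.
snakemake -s ccs_new.smk --executor slurm --jobs 400 \
    --default-resources slurm_partition=batch mem_mb=1000
```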