snakemake / snakemake-executor-plugin-slurm

A Snakemake executor plugin for submitting jobs to a SLURM cluster
MIT License

Slurm workflow submits initial jobs but then hangs until ctrl-c #157

Closed: freekvh closed this issue 2 weeks ago

freekvh commented 1 month ago

Software Versions

$ snakemake --version
8.23.0
$ mamba list | grep "snakemake-executor-plugin-slurm"
$ conda list | grep "snakemake-executor-plugin-slurm"
snakemake-executor-plugin-slurm          0.11.0  pyhdfd78af_0  bioconda
snakemake-executor-plugin-slurm-jobstep  0.2.1   pyhdfd78af_0  bioconda
$ sinfo --version
slurm 24.05.3

Describe the bug

When starting a Snakemake workflow on a SLURM cluster (SURF/SARA Snellius), the workflow starts but then hangs on the initially submitted jobs (which appear to complete and disappear from the squeue overview). It's as if Snakemake never gets the signal that the jobs have finished. This is my config:

executor: slurm
default-resources:
  slurm_partition: "rome"
  time: 1h
  # slurm_extra: "'-o cluster_outputs/smk.{rule}.{jobid}.out -e cluster_outputs/smk.{rule}.{jobid}.err'"
printshellcmds: True
jobs: 100
restart-times: 3
latency-wait: 60
rerun-incomplete: True
use-conda: True
conda-prefix: /home/me/projects/snaqs_files/snakemake_envs

set-threads:
    salmon: 16

set-resources:
    # rule specific resources
  fastqc:
    slurm_partition: staging

Logs

This is how it ends (I pressed ctrl-c at the line "^CTerminating processes on user request, this might take some time."):

[Tue Oct 15 18:56:16 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip
    jobid: 16
    benchmark: benchmark/0053_P2017BB3S19R_S1_R1.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S19R_S1, read=R1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=staging, time=1h

        fastqc fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz --outdir=qc/fastqc

No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
Job 16 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8178969 (log: /gpfs/home5/me/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S19R_S1_R1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8178969.log).
^CTerminating processes on user request, this might take some time.
WorkflowError:
Unable to cancel jobs with scancel (exit code 127): scancel: unrecognized option '--exclusive'
Try "scancel --help" for more information
/bin/sh: line 2: sbatch:: command not found
/bin/sh: line 3: sbatch:: command not found
/bin/sh: line 4: sbatch:: command not found
/bin/sh: line 9: 8178965: command not found
/bin/sh: line 10: sbatch:: command not found
/bin/sh: line 11: sbatch:: command not found
/bin/sh: line 12: sbatch:: command not found
/bin/sh: line 17: 8178967: command not found
/bin/sh: line 18: sbatch:: command not found
/bin/sh: line 19: sbatch:: command not found
/bin/sh: line 20: sbatch:: command not found
/bin/sh: line 25: 8178969: command not found

Minimal example

This is a big effort; if it's really required, I'll try to put together a minimal pipeline. Apologies.

Additional context

I have also posted a question at https://bioinformatics.stackexchange.com/questions/22963/snakemake-on-a-slurm-cluster; however, there I also ask for help with the generic executor (which doesn't let me specify the partition).

In general, I think some more extensive example profiles in which all options are used would be nice to have. And perhaps Snellius deviates from standard SLURM?
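
For what it's worth, here is a rough illustration of my understanding (my own sketch, not taken from the plugin code): judging from the log, whatever sbatch prints to stdout seems to be treated as the job ID. With --parsable I would expect only the numeric ID there, for example:

$ sbatch --parsable --wrap="sleep 10"    # hypothetical minimal submission
8178969                                  # expected: nothing but the job ID

Instead, the "jobid" captured above starts with the wrapper's informational text ("sbatch: Single-node jobs run on a shared node by default. ...").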

cmeesters commented 1 month ago

Hi,

this is really weird. scancel, as triggered by the plugin, does not carry an --exclusive flag. And why would a SLURM cluster report that the command sbatch is not found?
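
For reference, the cancel call the plugin issues is essentially of this shape (simplified by me here; the stored job IDs are filled in at runtime):

$ scancel <jobid> [<jobid> ...] --clusters=all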

Right now I am travelling, but I will find some time next week to look into issues. Meanwhile, can you please indicate where you submitted your workflow (within a job or on a login/head node)? And perhaps run it in verbose mode (just add --verbose to the command line) and attach a full Snakemake log, please?

I would like to know whether sbatch points to a binary or has been overridden by a wrapper (the rather informative output is not a default; admins have several ways of giving you that feedback). Can you post the output of which sbatch, too?
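
Something along these lines (plain shell tools, nothing plugin-specific) should already tell us whether it is a binary or a wrapper script:

$ which sbatch
$ file "$(which sbatch)"    # ELF executable vs. shell script text
$ type -a sbatch            # would also reveal an alias or shell function shadowing the binary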

I'm afraid I am not familiar with Snellius. What is your output of sacct during or after the run (same day)? (Background: the plugin keeps track of job states using SLURM's accounting mechanism.)
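
If you like, you can also run by hand a query similar to the one the plugin uses for its status polling (the values in angle brackets are placeholders; the name filter is the run UUID that Snakemake reports as "SLURM run ID"):

$ sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime <start-of-run> --endtime now --name <slurm-run-id>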

freekvh commented 1 month ago

Hi, thank you for your fast reply! Here are my answers:

It's strange, because the cluster-generic plugin works here, and its submit command is also sbatch.

I submit from a login/head node. There are some restrictions there (for example, you can't run processes longer than 1 hour), but my tests finish in about 5 minutes (with 10k-read files). Anyway, the cluster-generic executor works (it just does not select the right partitions for my lightweight jobs).

$ which sbatch
/usr/bin/sbatch
$ head -n 2 `which sbatch`
@)(@@@@@�@@@@ , ,00@0@__��@�@�L�L����@��@�����@��@�88@8@0hh@h@DDS�td88@8@0P�td����@��@��Q�tdR�td����@��@  /lib64/ld-linux-x86-64.so.2 GNU���GNUI�XG7���
��0>15o

(looks binary to me :))

This is during a run ->

$ sacct
JobID           JobName  Partition    Account  AllocCPUS      State ExitCode 
------------ ---------- ---------- ---------- ---------- ---------- -------- 
8183061      56d42312-+       rome    eccdcdc         16  COMPLETED      0:0 
8183061.bat+      batch               eccdcdc         16  COMPLETED      0:0 
8183061.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183061.0    python3.12               eccdcdc          1  COMPLETED      0:0 
8183062      56d42312-+       rome    eccdcdc         16     FAILED      1:0 
8183062.bat+      batch               eccdcdc         16     FAILED      1:0 
8183062.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183062.0    python3.12               eccdcdc         16 OUT_OF_ME+    0:125 
8183063      56d42312-+       rome    eccdcdc         16  COMPLETED      0:0 
8183063.bat+      batch               eccdcdc         16  COMPLETED      0:0 
8183063.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183063.0    python3.12               eccdcdc          1  COMPLETED      0:0 
8183064      56d42312-+       rome    eccdcdc         16  COMPLETED      0:0 
8183064.bat+      batch               eccdcdc         16  COMPLETED      0:0 
8183064.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183064.0    python3.12               eccdcdc          1  COMPLETED      0:0 
8183065      56d42312-+       rome    eccdcdc         16  COMPLETED      0:0 
8183065.bat+      batch               eccdcdc         16  COMPLETED      0:0 
8183065.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183065.0    python3.12               eccdcdc          1  COMPLETED      0:0 
8183066      56d42312-+       rome    eccdcdc         16     FAILED      1:0 
8183066.bat+      batch               eccdcdc         16     FAILED      1:0 
8183066.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183066.0    python3.12               eccdcdc         16 OUT_OF_ME+    0:125 
8183088      e985391f-+       rome    eccdcdc         16     FAILED      1:0 
8183088.bat+      batch               eccdcdc         16     FAILED      1:0 
8183088.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183088.0    python3.12               eccdcdc          1     FAILED      1:0 
8183089      e985391f-+       rome    eccdcdc         16  COMPLETED      0:0 
8183089.bat+      batch               eccdcdc         16  COMPLETED      0:0 
8183089.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183089.0    python3.12               eccdcdc          1  COMPLETED      0:0 
8183090      e985391f-+       rome    eccdcdc         16     FAILED      1:0 
8183090.bat+      batch               eccdcdc         16     FAILED      1:0 
8183090.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183090.0    python3.12               eccdcdc          1     FAILED      1:0 
8183091      e985391f-+       rome    eccdcdc         16  COMPLETED      0:0 
8183091.bat+      batch               eccdcdc         16  COMPLETED      0:0 
8183091.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183091.0    python3.12               eccdcdc          1  COMPLETED      0:0 
8183223      c91cbade-+       rome    eccdcdc          0    PENDING      0:0 
8183224      c91cbade-+    staging    eccdcdc          1    RUNNING      0:0 
8183224.bat+      batch               eccdcdc          1    RUNNING      0:0 
8183224.ext+     extern               eccdcdc          1    RUNNING      0:0 
8183225      c91cbade-+    staging    eccdcdc          1    RUNNING      0:0 
8183225.bat+      batch               eccdcdc          1    RUNNING      0:0 
8183225.ext+     extern               eccdcdc          1    RUNNING      0:0 
8183226      c91cbade-+       rome    eccdcdc          0    PENDING      0:0 
8183227      c91cbade-+    staging    eccdcdc          1    RUNNING      0:0 
8183227.bat+      batch               eccdcdc          1    RUNNING      0:0 
8183227.ext+     extern               eccdcdc          1    RUNNING      0:0 
8183228      c91cbade-+    staging    eccdcdc          1    RUNNING      0:0 
8183228.bat+      batch               eccdcdc          1    RUNNING      0:0 
8183228.ext+     extern               eccdcdc          1    RUNNING      0:0

I then waited for my processes to finish (no more jobs show up when checking with squeue), but no new jobs are submitted... I then hit ctrl-c. The complete output, with --verbose, is here:

$ snakemake --workflow-profile ./cluster_configs --verbose
Using workflow specific profile ./cluster_configs for setting default command line arguments.
host: int4
Building DAG of jobs...
Your conda installation is not configured to use strict channel priorities. This is however important for having robust and correct environments (for details, see https://conda-forge.org/docs/user/tipsandtricks.html). Please consider to configure strict priorities by executing 'conda config --set channel_priority strict'.
shared_storage_local_copies: True
remote_exec: False
SLURM run ID: c91cbade-e19f-4be6-8871-2c5a7f0b8fe0
Using shell: /usr/bin/bash
Provided remote nodes: 100
Job stats:
job                           count
--------------------------  -------
all                               1
complexity_20mer_counter          4
create_flagged_sampletable        1
create_pcs_raw_files              2
customqc_parameters               2
customqc_report                   1
fastqc                            4
qc_flagging                       2
rnaseq_multiqc                    1
salmon                            2
seqrun_expression_reports         1
tpm4_normalization                2
trim_galore                       2
total                            25

Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 9223372036854775807}
Ready jobs: 6
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/496608b438a441e8a9c28881aa8fdb12-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/496608b438a441e8a9c28881aa8fdb12-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 45 RHS
At line 49 BOUNDS
At line 56 ENDATA
Problem MODEL has 3 rows, 6 columns and 18 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 12 - 0.04 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 12 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                12.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.14
Time (Wallclock seconds):       0.09

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.20   (Wallclock seconds):       0.09

Selected jobs: 6
Resources after job selection: {'_cores': 9223372036854775801, '_nodes': 94, '_job_count': 9223372036854775807}
Execute 6 jobs...

[Wed Oct 16 09:10:46 2024]
rule trim_galore:
    input: fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz, fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz
    output: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_unpaired_1.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_unpaired_2.fq.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip
    jobid: 2
    benchmark: benchmark/0053_P2017BB3S19R_S1.trim_galore_pe.trim_galore.benchmark.tsv
    reason: Missing output files: qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt
    wildcards: sample=0053_P2017BB3S19R_S1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h

        trim_galore --fastqc --gzip -o fastq_trimmed --paired --retain_unpaired fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz
        # Move all qc reports from the fastq_trimmed directory to the trim_galore qc directory
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip qc/trim_galore

No SLURM account given, trying to guess.
Guessed SLURM account: eccdcdc
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env params code mtime input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_trim_galore/0053_P2017BB3S19R_S1/%j.log' --export=ALL --comment rule_trim_galore_wildcards_0053_P2017BB3S19R_S1 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'trim_galore:sample=0053_P2017BB3S19R_S1' --allowed-rules 'trim_galore' --cores 94 --attempt 1 --force-use-threads  --unneeded-temp-files 'fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_unpaired_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_unpaired_2.fq.gz' --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.0phlmft5' 'fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz' 'fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env params code mtime input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 2 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183223 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_trim_galore/0053_P2017BB3S19R_S1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183223.log).

[Wed Oct 16 09:10:46 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip
    jobid: 19
    benchmark: benchmark/0053_P2017BB3S20R_S2_R2.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S20R_S2, read=R2
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=staging, time=1h

        fastqc fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz --outdir=qc/fastqc

No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env params code mtime input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S20R_S2_R2/%j.log' --export=ALL --comment rule_fastqc_wildcards_0053_P2017BB3S20R_S2_R2 -A eccdcdc -p staging --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S20R_S2,read=R2' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.0phlmft5' 'fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env params code mtime input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 19 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183224 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S20R_S2_R2/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183224.log).

[Wed Oct 16 09:10:46 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip
    jobid: 18
    benchmark: benchmark/0053_P2017BB3S20R_S2_R1.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S20R_S2, read=R1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=staging, time=1h

        fastqc fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz --outdir=qc/fastqc

No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env params code mtime input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S20R_S2_R1/%j.log' --export=ALL --comment rule_fastqc_wildcards_0053_P2017BB3S20R_S2_R1 -A eccdcdc -p staging --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S20R_S2,read=R1' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.0phlmft5' 'fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env params code mtime input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 18 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183225 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S20R_S2_R1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183225.log).

[Wed Oct 16 09:10:47 2024]
rule trim_galore:
    input: fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz, fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz
    output: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_unpaired_1.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_unpaired_2.fq.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip
    jobid: 4
    benchmark: benchmark/0053_P2017BB3S20R_S2.trim_galore_pe.trim_galore.benchmark.tsv
    reason: Missing output files: fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html
    wildcards: sample=0053_P2017BB3S20R_S2
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h

        trim_galore --fastqc --gzip -o fastq_trimmed --paired --retain_unpaired fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz
        # Move all qc reports from the fastq_trimmed directory to the trim_galore qc directory
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip qc/trim_galore

No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env params code mtime input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_trim_galore/0053_P2017BB3S20R_S2/%j.log' --export=ALL --comment rule_trim_galore_wildcards_0053_P2017BB3S20R_S2 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'trim_galore:sample=0053_P2017BB3S20R_S2' --allowed-rules 'trim_galore' --cores 94 --attempt 1 --force-use-threads  --unneeded-temp-files 'fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_unpaired_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_unpaired_2.fq.gz' --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.0phlmft5' 'fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz' 'fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env params code mtime input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 4 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183226 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_trim_galore/0053_P2017BB3S20R_S2/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183226.log).

[Wed Oct 16 09:10:47 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip
    jobid: 17
    benchmark: benchmark/0053_P2017BB3S19R_S1_R2.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S19R_S1, read=R2
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=staging, time=1h

        fastqc fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz --outdir=qc/fastqc

No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env params code mtime input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S19R_S1_R2/%j.log' --export=ALL --comment rule_fastqc_wildcards_0053_P2017BB3S19R_S1_R2 -A eccdcdc -p staging --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S19R_S1,read=R2' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.0phlmft5' 'fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env params code mtime input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 17 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183227 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S19R_S1_R2/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183227.log).

[Wed Oct 16 09:10:47 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip
    jobid: 16
    benchmark: benchmark/0053_P2017BB3S19R_S1_R1.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html
    wildcards: sample=0053_P2017BB3S19R_S1, read=R1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=staging, time=1h

        fastqc fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz --outdir=qc/fastqc

No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env params code mtime input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S19R_S1_R1/%j.log' --export=ALL --comment rule_fastqc_wildcards_0053_P2017BB3S19R_S1_R1 -A eccdcdc -p staging --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S19R_S1,read=R1' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.0phlmft5' 'fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env params code mtime input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 16 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183228 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S19R_S1_R1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183228.log).
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0
It took: 0.03329920768737793 seconds
The output is:
'8183223|RUNNING
8183224|COMPLETED
8183225|COMPLETED
8183226|RUNNING
8183227|COMPLETED
8183228|COMPLETED
'

status_of_jobs after sacct is: {'8183223': 'RUNNING', '8183224': 'COMPLETED', '8183225': 'COMPLETED', '8183226': 'RUNNING', '8183227': 'COMPLETED', '8183228': 'COMPLETED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0
It took: 0.031170368194580078 seconds
The output is:
'8183223|COMPLETED
8183224|COMPLETED
8183225|COMPLETED
8183226|COMPLETED
8183227|COMPLETED
8183228|COMPLETED
'

status_of_jobs after sacct is: {'8183223': 'COMPLETED', '8183224': 'COMPLETED', '8183225': 'COMPLETED', '8183226': 'COMPLETED', '8183227': 'COMPLETED', '8183228': 'COMPLETED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0
It took: 0.032552242279052734 seconds
The output is:
'8183223|COMPLETED
8183224|COMPLETED
8183225|COMPLETED
8183226|COMPLETED
8183227|COMPLETED
8183228|COMPLETED
'

status_of_jobs after sacct is: {'8183223': 'COMPLETED', '8183224': 'COMPLETED', '8183225': 'COMPLETED', '8183226': 'COMPLETED', '8183227': 'COMPLETED', '8183228': 'COMPLETED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0
It took: 0.03798389434814453 seconds
The output is:
'8183223|COMPLETED
8183224|COMPLETED
8183225|COMPLETED
8183226|COMPLETED
8183227|COMPLETED
8183228|COMPLETED
'

status_of_jobs after sacct is: {'8183223': 'COMPLETED', '8183224': 'COMPLETED', '8183225': 'COMPLETED', '8183226': 'COMPLETED', '8183227': 'COMPLETED', '8183228': 'COMPLETED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
^CTerminating processes on user request, this might take some time.
unlocking
removing lock
removing lock
removed all locks
Full Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/scheduler.py", line 189, in schedule
    self._open_jobs.acquire()
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/threading.py", line 507, in acquire
    self._cond.wait(timeout)
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/threading.py", line 355, in wait
    waiter.acquire()
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_executor_plugin_slurm/__init__.py", line 416, in cancel_jobs
    subprocess.check_output(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/subprocess.py", line 466, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'scancel sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183223 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183224 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183225 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183226 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183227 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183228 --clusters=all' returned non-zero exit status 127.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/cli.py", line 2091, in args_to_api
    dag_api.execute_workflow(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/api.py", line 595, in execute_workflow
    workflow.execute(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/workflow.py", line 1264, in execute
    raise e
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/workflow.py", line 1260, in execute
    success = self.scheduler.schedule()
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/scheduler.py", line 318, in schedule
    self._executor.cancel()
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_interface_executor_plugins/executors/remote.py", line 109, in cancel
    self.cancel_jobs(active_jobs)
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_executor_plugin_slurm/__init__.py", line 429, in cancel_jobs
    raise WorkflowError(
snakemake_interface_common.exceptions.WorkflowError: Unable to cancel jobs with scancel (exit code 127): scancel: unrecognized option '--exclusive'
Try "scancel --help" for more information
/bin/sh: line 2: sbatch:: command not found
/bin/sh: line 3: sbatch:: command not found
/bin/sh: line 4: sbatch:: command not found
/bin/sh: line 9: 8183224: command not found
/bin/sh: line 10: sbatch:: command not found
/bin/sh: line 11: sbatch:: command not found
/bin/sh: line 12: sbatch:: command not found
/bin/sh: line 17: 8183226: command not found
/bin/sh: line 18: sbatch:: command not found
/bin/sh: line 19: sbatch:: command not found
/bin/sh: line 20: sbatch:: command not found
/bin/sh: line 25: 8183228: command not found

WorkflowError:
Unable to cancel jobs with scancel (exit code 127): scancel: unrecognized option '--exclusive'
Try "scancel --help" for more information
/bin/sh: line 2: sbatch:: command not found
/bin/sh: line 3: sbatch:: command not found
/bin/sh: line 4: sbatch:: command not found
/bin/sh: line 9: 8183224: command not found
/bin/sh: line 10: sbatch:: command not found
/bin/sh: line 11: sbatch:: command not found
/bin/sh: line 12: sbatch:: command not found
/bin/sh: line 17: 8183226: command not found
/bin/sh: line 18: sbatch:: command not found
/bin/sh: line 19: sbatch:: command not found
/bin/sh: line 20: sbatch:: command not found
/bin/sh: line 25: 8183228: command not found

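For what it's worth, the scancel error above suggests that the informational "sbatch: ..." banner printed by the cluster is captured together with the job ID, so the stored "job ID" becomes a multi-line string and scancel is then called with that whole blob (hence the exit status 127 and the "sbatch:: command not found" lines). Below is a minimal sketch of how the ID could be extracted defensively; this is hypothetical code, not the plugin's actual implementation, and it assumes the real --parsable answer is the only line consisting of digits (optionally followed by ";cluster"):

import re
import subprocess

def submit_and_get_jobid(sbatch_cmd: str) -> str:
    """Run an 'sbatch --parsable' command and return only the numeric job ID.

    Site wrappers (as apparently on Snellius) print informational
    'sbatch: ...' lines around the ID, so scan the captured output for a
    line that is a plain job ID instead of using the output verbatim.
    """
    result = subprocess.run(
        sbatch_cmd,
        shell=True,
        check=True,
        text=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    for line in result.stdout.splitlines():
        match = re.match(r"^\s*(\d+)(;\S+)?\s*$", line)
        if match:
            return match.group(1)
    raise ValueError(f"No job ID found in sbatch output:\n{result.stdout}")

With something like this, the IDs handed to scancel and sacct would stay plain numbers (e.g. 8183223) instead of the multi-line strings visible in the traceback above.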
Restarting Snakemake

If I then restart Snakemake, it picks up the correct remaining tasks and finishes them, but then it hangs again... Here is the output of the second run:

$ snakemake --workflow-profile ./cluster_configs --verbose
Using workflow specific profile ./cluster_configs for setting default command line arguments.
host: int4
Building DAG of jobs...
Your conda installation is not configured to use strict channel priorities. This is however important for having robust and correct environments (for details, see https://conda-forge.org/docs/user/tipsandtricks.html). Please consider to configure strict priorities by executing 'conda config --set channel_priority strict'.
shared_storage_local_copies: True
remote_exec: False
SLURM run ID: 659f8275-565c-40d3-bdfb-2a9135623e26
Using shell: /usr/bin/bash
Provided remote nodes: 100
Job stats:
job                           count
--------------------------  -------
all                               1
complexity_20mer_counter          4
create_flagged_sampletable        1
create_pcs_raw_files              2
customqc_parameters               2
customqc_report                   1
qc_flagging                       2
rnaseq_multiqc                    1
salmon                            2
seqrun_expression_reports         1
tpm4_normalization                2
total                            19

Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 9223372036854775807}
Ready jobs: 6
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/115d8df167be4821887d4855d1c1c86b-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/115d8df167be4821887d4855d1c1c86b-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 16 COLUMNS
At line 89 RHS
At line 101 BOUNDS
At line 116 ENDATA
Problem MODEL has 11 rows, 14 columns and 38 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 72.0033 - 0.00 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 72.0033 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                72.00330996
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.00
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.14   (Wallclock seconds):       0.10

Selected jobs: 6
Resources after job selection: {'_cores': 9223372036854775771, '_nodes': 94, '_job_count': 9223372036854775807}
Execute 6 jobs...

[Wed Oct 16 09:16:04 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt
    jobid: 12
    benchmark: benchmark/0053_P2017BB3S20R_S2_R2.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt
    wildcards: sample=0053_P2017BB3S20R_S2, read=R2
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h

        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz > qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt

No SLURM account given, trying to guess.
Guessed SLURM account: eccdcdc
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env mtime params code input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name 659f8275-565c-40d3-bdfb-2a9135623e26 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S20R_S2_R2/%j.log' --export=ALL --comment rule_complexity_20mer_counter_wildcards_0053_P2017BB3S20R_S2_R2 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S20R_S2,read=R2' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.zqg1jppu' 'fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env mtime params code input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 12 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183255 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S20R_S2_R2/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183255.log).

[Wed Oct 16 09:16:04 2024]
rule salmon:
    input: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz
    output: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf
    jobid: 1
    benchmark: benchmark/0053_P2017BB3S19R_S1.salmon.salmon.benchmark.tsv
    reason: Missing output files: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf
    wildcards: sample=0053_P2017BB3S19R_S1
    threads: 16
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h

        salmon quant         --index /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/salmon_index         --geneMap /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/gencode.v46.annotation.gtf         --libType A         --mates1 fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz         --mates2 fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz         --validateMappings         --threads 16         --output analyzed/salmon_0053_P2017BB3S19R_S1

No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env mtime params code input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name 659f8275-565c-40d3-bdfb-2a9135623e26 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_salmon/0053_P2017BB3S19R_S1/%j.log' --export=ALL --comment rule_salmon_wildcards_0053_P2017BB3S19R_S1 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=16 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'salmon:sample=0053_P2017BB3S19R_S1' --allowed-rules 'salmon' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.zqg1jppu' 'fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/fdd42d6c6ccfbbce54b3edf8d70cf513_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env mtime params code input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 1 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183256 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_salmon/0053_P2017BB3S19R_S1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183256.log).

[Wed Oct 16 09:16:04 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt
    jobid: 9
    benchmark: benchmark/0053_P2017BB3S19R_S1_R1.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt
    wildcards: sample=0053_P2017BB3S19R_S1, read=R1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h

        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz > qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt

No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env mtime params code input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name 659f8275-565c-40d3-bdfb-2a9135623e26 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S19R_S1_R1/%j.log' --export=ALL --comment rule_complexity_20mer_counter_wildcards_0053_P2017BB3S19R_S1_R1 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S19R_S1,read=R1' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.zqg1jppu' 'fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env mtime params code input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 9 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183257 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S19R_S1_R1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183257.log).

[Wed Oct 16 09:16:05 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt
    jobid: 10
    benchmark: benchmark/0053_P2017BB3S19R_S1_R2.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt
    wildcards: sample=0053_P2017BB3S19R_S1, read=R2
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h

        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz > qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt

No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env mtime params code input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name 659f8275-565c-40d3-bdfb-2a9135623e26 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S19R_S1_R2/%j.log' --export=ALL --comment rule_complexity_20mer_counter_wildcards_0053_P2017BB3S19R_S1_R2 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S19R_S1,read=R2' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.zqg1jppu' 'fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env mtime params code input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 10 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183258 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S19R_S1_R2/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183258.log).

[Wed Oct 16 09:16:05 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt
    jobid: 11
    benchmark: benchmark/0053_P2017BB3S20R_S2_R1.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt
    wildcards: sample=0053_P2017BB3S20R_S2, read=R1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h

        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz > qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt

No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env mtime params code input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name 659f8275-565c-40d3-bdfb-2a9135623e26 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S20R_S2_R1/%j.log' --export=ALL --comment rule_complexity_20mer_counter_wildcards_0053_P2017BB3S20R_S2_R1 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S20R_S2,read=R1' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.zqg1jppu' 'fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env mtime params code input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 11 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183259 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S20R_S2_R1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183259.log).

[Wed Oct 16 09:16:05 2024]
rule salmon:
    input: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz
    output: analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    jobid: 3
    benchmark: benchmark/0053_P2017BB3S20R_S2.salmon.salmon.benchmark.tsv
    reason: Missing output files: analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    wildcards: sample=0053_P2017BB3S20R_S2
    threads: 16
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h

        salmon quant         --index /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/salmon_index         --geneMap /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/gencode.v46.annotation.gtf         --libType A         --mates1 fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz         --mates2 fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz         --validateMappings         --threads 16         --output analyzed/salmon_0053_P2017BB3S20R_S2

No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env mtime params code input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name 659f8275-565c-40d3-bdfb-2a9135623e26 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_salmon/0053_P2017BB3S20R_S2/%j.log' --export=ALL --comment rule_salmon_wildcards_0053_P2017BB3S20R_S2 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=16 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'salmon:sample=0053_P2017BB3S20R_S2' --allowed-rules 'salmon' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.zqg1jppu' 'fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/fdd42d6c6ccfbbce54b3edf8d70cf513_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env mtime params code input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 3 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183260 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_salmon/0053_P2017BB3S20R_S2/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183260.log).
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name 659f8275-565c-40d3-bdfb-2a9135623e26
It took: 0.0313105583190918 seconds
The output is:
'8183255|COMPLETED
8183256|FAILED
8183257|COMPLETED
8183258|COMPLETED
8183259|COMPLETED
8183260|FAILED
'

status_of_jobs after sacct is: {'8183255': 'COMPLETED', '8183256': 'FAILED', '8183257': 'COMPLETED', '8183258': 'COMPLETED', '8183259': 'COMPLETED', '8183260': 'FAILED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name 659f8275-565c-40d3-bdfb-2a9135623e26
It took: 0.03162860870361328 seconds
The output is:
'8183255|COMPLETED
8183256|FAILED
8183257|COMPLETED
8183258|COMPLETED
8183259|COMPLETED
8183260|FAILED
'

status_of_jobs after sacct is: {'8183255': 'COMPLETED', '8183256': 'FAILED', '8183257': 'COMPLETED', '8183258': 'COMPLETED', '8183259': 'COMPLETED', '8183260': 'FAILED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name 659f8275-565c-40d3-bdfb-2a9135623e26
It took: 0.02939462661743164 seconds
The output is:
'8183255|COMPLETED
8183256|FAILED
8183257|COMPLETED
8183258|COMPLETED
8183259|COMPLETED
8183260|FAILED
'

status_of_jobs after sacct is: {'8183255': 'COMPLETED', '8183256': 'FAILED', '8183257': 'COMPLETED', '8183258': 'COMPLETED', '8183259': 'COMPLETED', '8183260': 'FAILED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name 659f8275-565c-40d3-bdfb-2a9135623e26
It took: 0.03914189338684082 seconds
The output is:
'8183255|COMPLETED
8183256|FAILED
8183257|COMPLETED
8183258|COMPLETED
8183259|COMPLETED
8183260|FAILED
'

status_of_jobs after sacct is: {'8183255': 'COMPLETED', '8183256': 'FAILED', '8183257': 'COMPLETED', '8183258': 'COMPLETED', '8183259': 'COMPLETED', '8183260': 'FAILED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name 659f8275-565c-40d3-bdfb-2a9135623e26
It took: 0.03030538558959961 seconds
The output is:
'8183255|COMPLETED
8183256|FAILED
8183257|COMPLETED
8183258|COMPLETED
8183259|COMPLETED
8183260|FAILED
'

status_of_jobs after sacct is: {'8183255': 'COMPLETED', '8183256': 'FAILED', '8183257': 'COMPLETED', '8183258': 'COMPLETED', '8183259': 'COMPLETED', '8183260': 'FAILED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
^CTerminating processes on user request, this might take some time.
unlocking
removing lock
removing lock
removed all locks
Full Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/scheduler.py", line 189, in schedule
    self._open_jobs.acquire()
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/threading.py", line 507, in acquire
    self._cond.wait(timeout)
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/threading.py", line 355, in wait
    waiter.acquire()
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_executor_plugin_slurm/__init__.py", line 416, in cancel_jobs
    subprocess.check_output(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/subprocess.py", line 466, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'scancel sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183255 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183256 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183257 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183258 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183259 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183260 --clusters=all' returned non-zero exit status 127.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/cli.py", line 2091, in args_to_api
    dag_api.execute_workflow(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/api.py", line 595, in execute_workflow
    workflow.execute(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/workflow.py", line 1264, in execute
    raise e
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/workflow.py", line 1260, in execute
    success = self.scheduler.schedule()
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/scheduler.py", line 318, in schedule
    self._executor.cancel()
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_interface_executor_plugins/executors/remote.py", line 109, in cancel
    self.cancel_jobs(active_jobs)
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_executor_plugin_slurm/__init__.py", line 429, in cancel_jobs
    raise WorkflowError(
snakemake_interface_common.exceptions.WorkflowError: Unable to cancel jobs with scancel (exit code 127): scancel: unrecognized option '--exclusive'
Try "scancel --help" for more information
/bin/sh: line 2: sbatch:: command not found
/bin/sh: line 3: sbatch:: command not found
/bin/sh: line 4: sbatch:: command not found
/bin/sh: line 9: 8183256: command not found
/bin/sh: line 10: sbatch:: command not found
/bin/sh: line 11: sbatch:: command not found
/bin/sh: line 12: sbatch:: command not found
/bin/sh: line 17: 8183258: command not found
/bin/sh: line 18: sbatch:: command not found
/bin/sh: line 19: sbatch:: command not found
/bin/sh: line 20: sbatch:: command not found
/bin/sh: line 25: 8183260: command not found

WorkflowError:
Unable to cancel jobs with scancel (exit code 127): scancel: unrecognized option '--exclusive'
Try "scancel --help" for more information
/bin/sh: line 2: sbatch:: command not found
/bin/sh: line 3: sbatch:: command not found
/bin/sh: line 4: sbatch:: command not found
/bin/sh: line 9: 8183256: command not found
/bin/sh: line 10: sbatch:: command not found
/bin/sh: line 11: sbatch:: command not found
/bin/sh: line 12: sbatch:: command not found
/bin/sh: line 17: 8183258: command not found
/bin/sh: line 18: sbatch:: command not found
/bin/sh: line 19: sbatch:: command not found
/bin/sh: line 20: sbatch:: command not found
/bin/sh: line 25: 8183260: command not found

It looks like Salmon did not produce the expected output? (Or it was deleted by Snakemake; I see some "untracked" files, so I am not sure what happened. I do know the workflow works in other setups, so the workflow itself is not the problem.)
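To check whether salmon actually failed (rather than Snakemake merely losing track of it), something along these lines should show the exit codes and memory usage of the two FAILED jobs from the sacct output above. A quick sketch using the job IDs from this particular run; the equivalent sacct command can of course also be run directly in the shell:

import subprocess

# Ask sacct why the two salmon jobs from the log above ended up FAILED.
failed_jobs = "8183256,8183260"
result = subprocess.run(
    [
        "sacct",
        "-j", failed_jobs,
        "--parsable2",
        "--noheader",
        "--format=JobID,JobName,State,ExitCode,Elapsed,MaxRSS",
    ],
    check=True,
    text=True,
    capture_output=True,
)
print(result.stdout)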

freekvh commented 1 month ago

For reference, here is the same workflow run with the cluster-generic executor, which finished successfully.

The configuration:

executor: cluster-generic
cluster-generic-submit-cmd:
  sbatch
    --cpus-per-task=16
    --job-name={rule}-{jobid}
    --output=cluster_outputs/{rule}/{rule}-{wildcards}-%j.out
    --parsable
    --partition=rome
restart-times: 3
max-jobs-per-second: 10
max-status-checks-per-second: 1
local-cores: 1
latency-wait: 60
jobs: 100 # Check what the max is
keep-going: True
rerun-incomplete: True
printshellcmds: True
use-conda: True
conda-prefix: /home/fvhemert/projects/snaqs_files/snakemake_envs

set-threads:
  salmon: 16

set-resources:
  fastqc:
    partition: staging

The workflow output:

$ snakemake --workflow-profile ./cluster_configs --verbose
Using workflow specific profile ./cluster_configs for setting default command line arguments.
host: int4
Building DAG of jobs...
Your conda installation is not configured to use strict channel priorities. This is however important for having robust and correct environments (for details, see https://conda-forge.org/docs/user/tipsandtricks.html). Please consider to configure strict priorities by executing 'conda config --set channel_priority strict'.
shared_storage_local_copies: True
remote_exec: False
Using shell: /usr/bin/bash
Provided remote nodes: 100
Job stats:
job                           count
--------------------------  -------
all                               1
complexity_20mer_counter          4
create_flagged_sampletable        1
create_pcs_raw_files              2
customqc_parameters               2
customqc_report                   1
fastqc                            4
qc_flagging                       2
rnaseq_multiqc                    1
salmon                            2
seqrun_expression_reports         1
tpm4_normalization                2
trim_galore                       2
total                            25

Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 9223372036854775807}
Ready jobs: 6
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/6fb55e55c9e14a9489496e0e262ab7c5-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/6fb55e55c9e14a9489496e0e262ab7c5-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 45 RHS
At line 49 BOUNDS
At line 56 ENDATA
Problem MODEL has 3 rows, 6 columns and 18 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 12 - 0.02 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 12 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                12.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.09
Time (Wallclock seconds):       0.00

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.20   (Wallclock seconds):       0.09

Selected jobs: 6
Resources after job selection: {'_cores': 9223372036854775801, '_nodes': 94, '_job_count': 10}
Execute 6 jobs...

[Wed Oct 16 09:56:29 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip
    jobid: 18
    benchmark: benchmark/0053_P2017BB3S20R_S2_R1.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S20R_S2, read=R1
    resources: tmpdir=<TBD>, partition=staging

        fastqc fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz --outdir=qc/fastqc

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "fastqc", "local": false, "input": ["fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz"], "output": ["qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip"], "wildcards": {"sample": "0053_P2017BB3S20R_S2", "read": "R1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>", "partition": "staging"}, "jobid": 18}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S20R_S2,read=R1' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/18.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/18.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 18 with external jobid '8183872'.

[Wed Oct 16 09:56:29 2024]
rule trim_galore:
    input: fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz, fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz
    output: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_unpaired_1.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_unpaired_2.fq.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip
    jobid: 4
    benchmark: benchmark/0053_P2017BB3S20R_S2.trim_galore_pe.trim_galore.benchmark.tsv
    reason: Missing output files: qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip
    wildcards: sample=0053_P2017BB3S20R_S2
    resources: tmpdir=<TBD>

        trim_galore --fastqc --gzip -o fastq_trimmed --paired --retain_unpaired fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz
        # Move all qc reports from the fastq_trimmed directory to the trim_galore qc directory
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip qc/trim_galore

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "trim_galore", "local": false, "input": ["fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz", "fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz"], "output": ["fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz", "fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz", "fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_unpaired_1.fq.gz", "fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_unpaired_2.fq.gz", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip"], "wildcards": {"sample": "0053_P2017BB3S20R_S2"}, "params": {"trimming_report_read1": "fastq_trimmed/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt", "trimming_report_read2": "fastq_trimmed/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt", "fastqc_html_read1": "fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html", "fastqc_html_read2": "fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html", "fastqc_zip_read1": "fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip", "fastqc_zip_read2": "fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip"}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 4}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'trim_galore:sample=0053_P2017BB3S20R_S2' --allowed-rules 'trim_galore' --cores 94 --attempt 1 --force-use-threads  --unneeded-temp-files 'fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_unpaired_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_unpaired_2.fq.gz' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz' 'fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/4.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/4.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 4 with external jobid '8183873'.

[Wed Oct 16 09:56:29 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip
    jobid: 17
    benchmark: benchmark/0053_P2017BB3S19R_S1_R2.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S19R_S1, read=R2
    resources: tmpdir=<TBD>, partition=staging

        fastqc fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz --outdir=qc/fastqc

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "fastqc", "local": false, "input": ["fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz"], "output": ["qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip"], "wildcards": {"sample": "0053_P2017BB3S19R_S1", "read": "R2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>", "partition": "staging"}, "jobid": 17}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S19R_S1,read=R2' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/17.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/17.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 17 with external jobid '8183874'.

[Wed Oct 16 09:56:29 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip
    jobid: 16
    benchmark: benchmark/0053_P2017BB3S19R_S1_R1.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S19R_S1, read=R1
    resources: tmpdir=<TBD>, partition=staging

        fastqc fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz --outdir=qc/fastqc

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "fastqc", "local": false, "input": ["fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz"], "output": ["qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip"], "wildcards": {"sample": "0053_P2017BB3S19R_S1", "read": "R1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>", "partition": "staging"}, "jobid": 16}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S19R_S1,read=R1' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/16.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/16.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 16 with external jobid '8183875'.

[Wed Oct 16 09:56:30 2024]
rule trim_galore:
    input: fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz, fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz
    output: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_unpaired_1.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_unpaired_2.fq.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip
    jobid: 2
    benchmark: benchmark/0053_P2017BB3S19R_S1.trim_galore_pe.trim_galore.benchmark.tsv
    reason: Missing output files: qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html
    wildcards: sample=0053_P2017BB3S19R_S1
    resources: tmpdir=<TBD>

        trim_galore --fastqc --gzip -o fastq_trimmed --paired --retain_unpaired fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz
        # Move all qc reports from the fastq_trimmed directory to the trim_galore qc directory
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip qc/trim_galore

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "trim_galore", "local": false, "input": ["fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz", "fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz"], "output": ["fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz", "fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz", "fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_unpaired_1.fq.gz", "fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_unpaired_2.fq.gz", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip"], "wildcards": {"sample": "0053_P2017BB3S19R_S1"}, "params": {"trimming_report_read1": "fastq_trimmed/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt", "trimming_report_read2": "fastq_trimmed/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt", "fastqc_html_read1": "fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html", "fastqc_html_read2": "fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html", "fastqc_zip_read1": "fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip", "fastqc_zip_read2": "fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip"}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 2}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'trim_galore:sample=0053_P2017BB3S19R_S1' --allowed-rules 'trim_galore' --cores 94 --attempt 1 --force-use-threads  --unneeded-temp-files 'fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_unpaired_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_unpaired_2.fq.gz' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz' 'fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/2.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/2.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 2 with external jobid '8183876'.

[Wed Oct 16 09:56:30 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip
    jobid: 19
    benchmark: benchmark/0053_P2017BB3S20R_S2_R2.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html
    wildcards: sample=0053_P2017BB3S20R_S2, read=R2
    resources: tmpdir=<TBD>, partition=staging

        fastqc fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz --outdir=qc/fastqc

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "fastqc", "local": false, "input": ["fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz"], "output": ["qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip"], "wildcards": {"sample": "0053_P2017BB3S20R_S2", "read": "R2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>", "partition": "staging"}, "jobid": 19}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S20R_S2,read=R2' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/19.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/19.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 19 with external jobid '8183877'.
[Wed Oct 16 09:57:24 2024]
Finished job 18.
1 of 25 steps (4%) done
[Wed Oct 16 09:57:25 2024]
Finished job 4.
2 of 25 steps (8%) done
Removing temporary output fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_unpaired_1.fq.gz.
Removing temporary output fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_unpaired_2.fq.gz.
Resources before job selection: {'_cores': 9223372036854775803, '_nodes': 96, '_job_count': 10}
Ready jobs: 3
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/83010442c6a54ae5a8b39f8d6f21fa28-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/83010442c6a54ae5a8b39f8d6f21fa28-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 12 COLUMNS
At line 49 RHS
At line 57 BOUNDS
At line 65 ENDATA
Problem MODEL has 7 rows, 7 columns and 19 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 36.0016 - 0.02 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 36.0016 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                36.00164002
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.11
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.22   (Wallclock seconds):       0.09

Selected jobs: 3
Resources after job selection: {'_cores': 9223372036854775785, '_nodes': 93, '_job_count': 10}
Execute 3 jobs...

[Wed Oct 16 09:57:25 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt
    jobid: 11
    benchmark: benchmark/0053_P2017BB3S20R_S2_R1.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt; Input files updated by another job: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz
    wildcards: sample=0053_P2017BB3S20R_S2, read=R1
    resources: tmpdir=<TBD>

        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz > qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "complexity_20mer_counter", "local": false, "input": ["fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz"], "output": ["qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt"], "wildcards": {"sample": "0053_P2017BB3S20R_S2", "read": "R1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 11}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S20R_S2,read=R1' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/11.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/11.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 11 with external jobid '8183882'.

[Wed Oct 16 09:57:25 2024]
rule salmon:
    input: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz
    output: analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    jobid: 3
    benchmark: benchmark/0053_P2017BB3S20R_S2.salmon.salmon.benchmark.tsv
    reason: Missing output files: analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf; Input files updated by another job: fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz
    wildcards: sample=0053_P2017BB3S20R_S2
    threads: 16
    resources: tmpdir=<TBD>

        salmon quant         --index /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/salmon_index         --geneMap /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/gencode.v46.annotation.gtf         --libType A         --mates1 fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz         --mates2 fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz         --validateMappings         --threads 16         --output analyzed/salmon_0053_P2017BB3S20R_S2

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "salmon", "local": false, "input": ["fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz", "fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz"], "output": ["analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf"], "wildcards": {"sample": "0053_P2017BB3S20R_S2"}, "params": {"salmon_index": "/home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/salmon_index", "gtf_file": "/home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/gencode.v46.annotation.gtf"}, "log": [], "threads": 16, "resources": {"tmpdir": "<TBD>"}, "jobid": 3}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'salmon:sample=0053_P2017BB3S20R_S2' --allowed-rules 'salmon' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/fdd42d6c6ccfbbce54b3edf8d70cf513_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/3.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/3.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 3 with external jobid '8183883'.

[Wed Oct 16 09:57:25 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt
    jobid: 12
    benchmark: benchmark/0053_P2017BB3S20R_S2_R2.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt; Input files updated by another job: fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz
    wildcards: sample=0053_P2017BB3S20R_S2, read=R2
    resources: tmpdir=<TBD>

        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz > qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "complexity_20mer_counter", "local": false, "input": ["fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz"], "output": ["qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt"], "wildcards": {"sample": "0053_P2017BB3S20R_S2", "read": "R2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 12}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S20R_S2,read=R2' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/12.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/12.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 12 with external jobid '8183884'.
[Wed Oct 16 09:57:26 2024]
Finished job 17.
3 of 25 steps (12%) done
[Wed Oct 16 09:57:27 2024]
Finished job 16.
4 of 25 steps (16%) done
[Wed Oct 16 09:57:28 2024]
Finished job 2.
5 of 25 steps (20%) done
Removing temporary output fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_unpaired_1.fq.gz.
Removing temporary output fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_unpaired_2.fq.gz.
Resources before job selection: {'_cores': 9223372036854775788, '_nodes': 96, '_job_count': 10}
Ready jobs: 3
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/ab50c915c1604451a597f96291d02981-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/ab50c915c1604451a597f96291d02981-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 12 COLUMNS
At line 49 RHS
At line 57 BOUNDS
At line 65 ENDATA
Problem MODEL has 7 rows, 7 columns and 19 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 36.0017 - 0.00 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 36.0017 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                36.00166994
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.00
Time (Wallclock seconds):       0.00

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.00   (Wallclock seconds):       0.00

Selected jobs: 3
Resources after job selection: {'_cores': 9223372036854775770, '_nodes': 93, '_job_count': 10}
Execute 3 jobs...

[Wed Oct 16 09:57:28 2024]
rule salmon:
    input: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz
    output: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf
    jobid: 1
    benchmark: benchmark/0053_P2017BB3S19R_S1.salmon.salmon.benchmark.tsv
    reason: Missing output files: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf; Input files updated by another job: fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz
    wildcards: sample=0053_P2017BB3S19R_S1
    threads: 16
    resources: tmpdir=<TBD>

        salmon quant         --index /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/salmon_index         --geneMap /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/gencode.v46.annotation.gtf         --libType A         --mates1 fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz         --mates2 fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz         --validateMappings         --threads 16         --output analyzed/salmon_0053_P2017BB3S19R_S1

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "salmon", "local": false, "input": ["fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz", "fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz"], "output": ["analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf"], "wildcards": {"sample": "0053_P2017BB3S19R_S1"}, "params": {"salmon_index": "/home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/salmon_index", "gtf_file": "/home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/gencode.v46.annotation.gtf"}, "log": [], "threads": 16, "resources": {"tmpdir": "<TBD>"}, "jobid": 1}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'salmon:sample=0053_P2017BB3S19R_S1' --allowed-rules 'salmon' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/fdd42d6c6ccfbbce54b3edf8d70cf513_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/1.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/1.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 1 with external jobid '8183885'.

[Wed Oct 16 09:57:28 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt
    jobid: 9
    benchmark: benchmark/0053_P2017BB3S19R_S1_R1.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt; Input files updated by another job: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz
    wildcards: sample=0053_P2017BB3S19R_S1, read=R1
    resources: tmpdir=<TBD>

        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz > qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "complexity_20mer_counter", "local": false, "input": ["fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz"], "output": ["qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt"], "wildcards": {"sample": "0053_P2017BB3S19R_S1", "read": "R1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 9}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S19R_S1,read=R1' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/9.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/9.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 9 with external jobid '8183886'.

[Wed Oct 16 09:57:28 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt
    jobid: 10
    benchmark: benchmark/0053_P2017BB3S19R_S1_R2.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt; Input files updated by another job: fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz
    wildcards: sample=0053_P2017BB3S19R_S1, read=R2
    resources: tmpdir=<TBD>

        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz > qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "complexity_20mer_counter", "local": false, "input": ["fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz"], "output": ["qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt"], "wildcards": {"sample": "0053_P2017BB3S19R_S1", "read": "R2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 10}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S19R_S1,read=R2' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/10.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/10.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 10 with external jobid '8183887'.
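
(Annotation: the `base64//…` values in the `General args` lists are the profile settings re-encoded, apparently so they survive shell quoting in the per-job command line. Decoding them as a quick check recovers the expected settings from the workflow profile:)

```python
import base64

# The base64// payloads visible in the submitted commands above.
for flag, payload in [
    ("--set-resources",     "ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n"),
    ("--set-threads",       "c2FsbW9uPTE2"),
    ("--default-resources", "dG1wZGlyPXN5c3RlbV90bXBkaXI="),
]:
    print(flag, base64.b64decode(payload).decode())
# --set-resources fastqc:partition=staging
# --set-threads salmon=16
# --default-resources tmpdir=system_tmpdir
```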
[Wed Oct 16 09:57:29 2024]
Finished job 19.
6 of 25 steps (24%) done
[Wed Oct 16 09:57:54 2024]
Finished job 11.
7 of 25 steps (28%) done
[Wed Oct 16 09:58:24 2024]
Finished job 12.
8 of 25 steps (32%) done
[Wed Oct 16 09:58:26 2024]
Finished job 9.
9 of 25 steps (36%) done
[Wed Oct 16 09:58:27 2024]
Finished job 10.
10 of 25 steps (40%) done
[Wed Oct 16 09:59:10 2024]
Finished job 3.
11 of 25 steps (44%) done
Removing temporary output fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz.
Removing temporary output fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz.
Resources before job selection: {'_cores': 9223372036854775791, '_nodes': 99, '_job_count': 10}
Ready jobs: 1
Select jobs to execute...
Selecting jobs to run using greedy solver.
Selected jobs: 1
Resources after job selection: {'_cores': 9223372036854775790, '_nodes': 98, '_job_count': 10}
Execute 1 jobs...
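
(Annotation: the huge `_cores` numbers in the resource dictionaries are not corruption. With a remote executor the local core budget is effectively unlimited, and the value looks like Python's `sys.maxsize` minus the threads of jobs currently in flight; here that is 16, matching the still-running 16-thread salmon job. A quick sanity check:)

```python
import sys

print(sys.maxsize)        # 9223372036854775807
print(sys.maxsize - 16)   # 9223372036854775791 -> '_cores' before selection (16-thread salmon running)
print(sys.maxsize - 17)   # 9223372036854775790 -> after one more 1-thread job is selected
```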

[Wed Oct 16 09:59:10 2024]
rule tpm4_normalization:
    input: analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    output: analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    jobid: 8
    benchmark: benchmark/0053_P2017BB3S20R_S2.tpm4_normalization_salmon.tpm4_normalization.benchmark.tsv
    reason: Missing output files: analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results; Input files updated by another job: analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    wildcards: sample=0053_P2017BB3S20R_S2
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "tpm4_normalization", "local": false, "input": ["analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf"], "output": ["analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results"], "wildcards": {"sample": "0053_P2017BB3S20R_S2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 8}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'tpm4_normalization:sample=0053_P2017BB3S20R_S2' --allowed-rules 'tpm4_normalization' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/8.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/8.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 8 with external jobid '8183922'.
[Wed Oct 16 09:59:44 2024]
Finished job 1.
12 of 25 steps (48%) done
Removing temporary output fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz.
Removing temporary output fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz.
Resources before job selection: {'_cores': 9223372036854775806, '_nodes': 99, '_job_count': 10}
Ready jobs: 2
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/955cce5d485c44c5b6356a7e25d76490-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/955cce5d485c44c5b6356a7e25d76490-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 21 RHS
At line 25 BOUNDS
At line 28 ENDATA
Problem MODEL has 3 rows, 2 columns and 6 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 4 - 0.02 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 4 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                4.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.11
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.20   (Wallclock seconds):       0.01

Selected jobs: 2
Resources after job selection: {'_cores': 9223372036854775804, '_nodes': 97, '_job_count': 10}
Execute 2 jobs...
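
(Annotation: the CBC banner above comes from Snakemake's ILP scheduler: ready jobs are handed to the CBC solver through PuLP with a 10-second limit and two threads, matching the `-sec 10 -threads 2` options echoed in the solver's command line. The toy model below only illustrates how such an invocation looks in PuLP; the objective and constraints are made up and are not Snakemake's actual scheduling model:)

```python
import pulp

# Toy selection problem: pick ready jobs to maximise a score within a core budget.
jobs = {"tpm4_normalization": (1, 2), "rnaseq_multiqc": (1, 2)}  # name -> (cores, score)
x = {name: pulp.LpVariable(f"x_{name}", cat="Binary") for name in jobs}

model = pulp.LpProblem("job_selection", pulp.LpMaximize)
model += pulp.lpSum(score * x[name] for name, (_, score) in jobs.items())
model += pulp.lpSum(cores * x[name] for name, (cores, _) in jobs.items()) <= 94

# Same solver options as in the log: 10 s limit, 2 threads.
model.solve(pulp.PULP_CBC_CMD(timeLimit=10, threads=2, msg=True))
print({name: int(pulp.value(var)) for name, var in x.items()})
```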

[Wed Oct 16 09:59:44 2024]
rule tpm4_normalization:
    input: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf
    output: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results
    jobid: 6
    benchmark: benchmark/0053_P2017BB3S19R_S1.tpm4_normalization_salmon.tpm4_normalization.benchmark.tsv
    reason: Missing output files: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results; Input files updated by another job: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf
    wildcards: sample=0053_P2017BB3S19R_S1
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "tpm4_normalization", "local": false, "input": ["analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf"], "output": ["analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results"], "wildcards": {"sample": "0053_P2017BB3S19R_S1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 6}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'tpm4_normalization:sample=0053_P2017BB3S19R_S1' --allowed-rules 'tpm4_normalization' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/6.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/6.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 6 with external jobid '8183926'.

[Wed Oct 16 09:59:44 2024]
rule rnaseq_multiqc:
    input: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    output: qc/multiqc_report.html
    jobid: 21
    benchmark: benchmark/rnaseq_multiqc_salmon.rnaseq_multiqc.benchmark.tsv
    reason: Missing output files: qc/multiqc_report.html; Input files updated by another job: qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html
    resources: tmpdir=<TBD>

        export LC_ALL=en_US.UTF-8
        export LANG=en_US.UTF-8
        multiqc qc analyzed -o qc -f

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "rnaseq_multiqc", "local": false, "input": ["qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip", "analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf", "analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf"], "output": ["qc/multiqc_report.html"], "wildcards": {}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 21}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'rnaseq_multiqc:' --allowed-rules 'rnaseq_multiqc' --cores 94 --attempt 1 --force-use-threads  --wait-for-files-file /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/snakejob.rnaseq_multiqc.21.sh.waitforfilesfile.txt --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/21.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/21.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 21 with external jobid '8183927'.
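
(Annotation: one small difference between the submitted commands: most jobs receive their inputs inline via `--wait-for-files`, but jobs with many inputs, such as `rnaseq_multiqc` above, get `--wait-for-files-file` pointing at a list written into the shared `.snakemake/tmp.*` directory, presumably to keep the sbatch command line short. Reading such a list back, assuming one path per line and using the path from the log:)

```python
from pathlib import Path

# Inspect which files job 21 was told to wait for (path taken from the log above).
waitfile = Path(".snakemake/tmp.8x1wwatf/snakejob.rnaseq_multiqc.21.sh.waitforfilesfile.txt")
wait_for = [line for line in waitfile.read_text().splitlines() if line.strip()]
print(len(wait_for), "files")
```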
[Wed Oct 16 10:00:20 2024]
Finished job 21.
13 of 25 steps (52%) done
[Wed Oct 16 10:00:41 2024]
Finished job 8.
14 of 25 steps (56%) done
Resources before job selection: {'_cores': 9223372036854775806, '_nodes': 99, '_job_count': 10}
Ready jobs: 2
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/f7d4cc79144d4ae9bfc9d51ff453c539-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/f7d4cc79144d4ae9bfc9d51ff453c539-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 21 RHS
At line 25 BOUNDS
At line 28 ENDATA
Problem MODEL has 3 rows, 2 columns and 6 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 4 - 0.02 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 4 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                4.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.06
Time (Wallclock seconds):       0.00

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.09   (Wallclock seconds):       0.01

Selected jobs: 2
Resources after job selection: {'_cores': 9223372036854775804, '_nodes': 97, '_job_count': 10}
Execute 2 jobs...

[Wed Oct 16 10:00:41 2024]
rule customqc_parameters:
    input: analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    output: qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_total.tsv.gz
    jobid: 15
    benchmark: benchmark/0053_P2017BB3S20R_S2.customqc_parameters.customqc_parameters.benchmark.tsv
    reason: Missing output files: qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_total.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png; Input files updated by another job: analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    wildcards: sample=0053_P2017BB3S20R_S2
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "customqc_parameters", "local": false, "input": ["analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results"], "output": ["qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.tpm4_total.tsv.gz"], "wildcards": {"sample": "0053_P2017BB3S20R_S2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 15}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'customqc_parameters:sample=0053_P2017BB3S20R_S2' --allowed-rules 'customqc_parameters' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/15.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/15.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 15 with external jobid '8183955'.

[Wed Oct 16 10:00:41 2024]
rule create_pcs_raw_files:
    input: analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    output: pcs/test_pipeline/raw/0053_P2017BB3S20R_S2_rna-seq_salmonTPM4.tsv
    jobid: 7
    benchmark: benchmark/0053_P2017BB3S20R_S2.create_pcs_raw_files.create_pcs_raw_files.benchmark.tsv
    reason: Missing output files: pcs/test_pipeline/raw/0053_P2017BB3S20R_S2_rna-seq_salmonTPM4.tsv; Input files updated by another job: analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    wildcards: sample=0053_P2017BB3S20R_S2
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "create_pcs_raw_files", "local": false, "input": ["analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results"], "output": ["pcs/test_pipeline/raw/0053_P2017BB3S20R_S2_rna-seq_salmonTPM4.tsv"], "wildcards": {"sample": "0053_P2017BB3S20R_S2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 7}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'create_pcs_raw_files:sample=0053_P2017BB3S20R_S2' --allowed-rules 'create_pcs_raw_files' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/7.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/7.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 7 with external jobid '8183956'.
[Wed Oct 16 10:01:17 2024]
Finished job 7.
15 of 25 steps (60%) done
[Wed Oct 16 10:01:18 2024]
Finished job 6.
16 of 25 steps (64%) done
Resources before job selection: {'_cores': 9223372036854775806, '_nodes': 99, '_job_count': 10}
Ready jobs: 3
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/44f15623e1c74abe93f61157816bbb61-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/44f15623e1c74abe93f61157816bbb61-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 27 RHS
At line 31 BOUNDS
At line 35 ENDATA
Problem MODEL has 3 rows, 3 columns and 9 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 6 - 0.03 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 6 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                6.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.08
Time (Wallclock seconds):       0.00

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.15   (Wallclock seconds):       0.10

Selected jobs: 3
Resources after job selection: {'_cores': 9223372036854775803, '_nodes': 96, '_job_count': 10}
Execute 3 jobs...

[Wed Oct 16 10:01:18 2024]
rule customqc_parameters:
    input: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results
    output: qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_total.tsv.gz
    jobid: 14
    benchmark: benchmark/0053_P2017BB3S19R_S1.customqc_parameters.customqc_parameters.benchmark.tsv
    reason: Missing output files: qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_total.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz; Input files updated by another job: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results
    wildcards: sample=0053_P2017BB3S19R_S1
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "customqc_parameters", "local": false, "input": ["analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results"], "output": ["qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.tpm4_total.tsv.gz"], "wildcards": {"sample": "0053_P2017BB3S19R_S1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 14}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'customqc_parameters:sample=0053_P2017BB3S19R_S1' --allowed-rules 'customqc_parameters' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/14.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/14.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 14 with external jobid '8183962'.

[Wed Oct 16 10:01:18 2024]
rule seqrun_expression_reports:
    input: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results, analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    output: /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_values.tsv
    jobid: 23
    benchmark: benchmark/seqrun_expression_reports.seqrun_expression_reports.benchmark.tsv
    reason: Missing output files: /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_log2values.tsv; Input files updated by another job: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results, analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "seqrun_expression_reports", "local": false, "input": ["analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results", "analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results"], "output": ["/gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_log2values.tsv", "/gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_log2values.tsv", "/gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_log2values.tsv", "/gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_values.tsv", "/gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_values.tsv", "/gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_values.tsv"], "wildcards": {}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 23}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'seqrun_expression_reports:' --allowed-rules 'seqrun_expression_reports' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results' 'analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/23.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/23.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 23 with external jobid '8183963'.

[Wed Oct 16 10:01:18 2024]
rule create_pcs_raw_files:
    input: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results
    output: pcs/test_pipeline/raw/0053_P2017BB3S19R_S1_rna-seq_salmonTPM4.tsv
    jobid: 5
    benchmark: benchmark/0053_P2017BB3S19R_S1.create_pcs_raw_files.create_pcs_raw_files.benchmark.tsv
    reason: Missing output files: pcs/test_pipeline/raw/0053_P2017BB3S19R_S1_rna-seq_salmonTPM4.tsv; Input files updated by another job: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results
    wildcards: sample=0053_P2017BB3S19R_S1
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "create_pcs_raw_files", "local": false, "input": ["analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results"], "output": ["pcs/test_pipeline/raw/0053_P2017BB3S19R_S1_rna-seq_salmonTPM4.tsv"], "wildcards": {"sample": "0053_P2017BB3S19R_S1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 5}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'create_pcs_raw_files:sample=0053_P2017BB3S19R_S1' --allowed-rules 'create_pcs_raw_files' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/5.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/5.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 5 with external jobid '8183964'.
[Wed Oct 16 10:01:43 2024]
Finished job 23.
17 of 25 steps (68%) done
[Wed Oct 16 10:01:44 2024]
Finished job 5.
18 of 25 steps (72%) done
[Wed Oct 16 10:02:16 2024]
Finished job 15.
19 of 25 steps (76%) done
[Wed Oct 16 10:02:47 2024]
Finished job 14.
20 of 25 steps (80%) done
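
(Annotation: each `Finished job N` line above appears once the main Snakemake process has observed the corresponding SLURM job reach a terminal state and the outputs have passed the latency-wait check. As a rough illustration of that kind of status polling, and explicitly not the plugin's actual implementation, assuming `sacct` is available on the submit host:)

```python
import subprocess
import time

TERMINAL = {"COMPLETED", "FAILED", "CANCELLED", "TIMEOUT", "OUT_OF_MEMORY", "NODE_FAIL"}

def slurm_state(jobid: str) -> str:
    """Return the job's state from SLURM accounting ('-X' = allocation only, no steps)."""
    out = subprocess.run(
        ["sacct", "-X", "-j", jobid, "--format=State", "--noheader", "--parsable2"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # 'CANCELLED by <uid>' carries a suffix, so keep only the first word.
    return out.splitlines()[0].split()[0] if out else "UNKNOWN"

# e.g. poll one of the external jobids reported above until it reaches a terminal state
while slurm_state("8183927") not in TERMINAL:
    time.sleep(30)
```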
Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 10}
Ready jobs: 3
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/8f4ef96319b64ee79f501ed3d3b536ad-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/8f4ef96319b64ee79f501ed3d3b536ad-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 27 RHS
At line 31 BOUNDS
At line 35 ENDATA
Problem MODEL has 3 rows, 3 columns and 9 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 6 - 0.02 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 6 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                6.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.11
Time (Wallclock seconds):       0.00

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.17   (Wallclock seconds):       0.01

Selected jobs: 3
Resources after job selection: {'_cores': 9223372036854775804, '_nodes': 97, '_job_count': 10}
Execute 3 jobs...

[Wed Oct 16 10:02:47 2024]
rule customqc_report:
    input: qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_total.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_total.tsv.gz
    output: qc/customqc_report.html, qc/customqc/cumulative_percentage_of_raw_reads.png, qc/customqc/normalized_refgene_pattern_all.png, qc/customqc/refgene_pattern_all.png
    jobid: 22
    benchmark: benchmark/customqc_report_salmon.customqc_report.benchmark.tsv
    reason: Missing output files: qc/customqc_report.html; Input files updated by another job: qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_total.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.tpm4_total.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "customqc_report", "local": false, "input": ["qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.tpm4_total.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.tpm4_total.tsv.gz"], "output": ["qc/customqc_report.html", "qc/customqc/cumulative_percentage_of_raw_reads.png", "qc/customqc/normalized_refgene_pattern_all.png", "qc/customqc/refgene_pattern_all.png"], "wildcards": {}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 22}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'customqc_report:' --allowed-rules 'customqc_report' --cores 94 --attempt 1 --force-use-threads  --wait-for-files-file /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/snakejob.customqc_report.22.sh.waitforfilesfile.txt --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/22.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/22.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 22 with external jobid '8183987'.

[Wed Oct 16 10:02:48 2024]
rule qc_flagging:
    input: qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    output: qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json
    jobid: 13
    benchmark: benchmark/0053_P2017BB3S19R_S1.qc_flagging_salmon.qc_flagging.benchmark.tsv
    reason: Missing output files: qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json; Input files updated by another job: qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf, qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz
    wildcards: sample=0053_P2017BB3S19R_S1
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "qc_flagging", "local": false, "input": ["qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz", "qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip", "qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt", "qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt", "qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt", "qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip", "analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf", "analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf"], "output": ["qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json"], "wildcards": {"sample": "0053_P2017BB3S19R_S1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 13}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'qc_flagging:sample=0053_P2017BB3S19R_S1' --allowed-rules 'qc_flagging' --cores 94 --attempt 1 --force-use-threads  --wait-for-files-file /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/snakejob.qc_flagging.13.sh.waitforfilesfile.txt --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/13.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/13.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 13 with external jobid '8183988'.

[Wed Oct 16 10:02:48 2024]
rule qc_flagging:
    input: qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    output: qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json
    jobid: 20
    benchmark: benchmark/0053_P2017BB3S20R_S2.qc_flagging_salmon.qc_flagging.benchmark.tsv
    reason: Missing output files: qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json; Input files updated by another job: qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf, qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz
    wildcards: sample=0053_P2017BB3S20R_S2
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "qc_flagging", "local": false, "input": ["qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz", "qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip", "qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt", "qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt", "qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt", "qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip", "analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf", "analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf"], "output": ["qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json"], "wildcards": {"sample": "0053_P2017BB3S20R_S2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 20}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'qc_flagging:sample=0053_P2017BB3S20R_S2' --allowed-rules 'qc_flagging' --cores 94 --attempt 1 --force-use-threads  --wait-for-files-file /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/snakejob.qc_flagging.20.sh.waitforfilesfile.txt --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/20.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/20.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 20 with external jobid '8183989'.
[Wed Oct 16 10:03:21 2024]
Finished job 22.
21 of 25 steps (84%) done
[Wed Oct 16 10:03:22 2024]
Finished job 13.
22 of 25 steps (88%) done
[Wed Oct 16 10:03:23 2024]
Finished job 20.
23 of 25 steps (92%) done
Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 10}
Ready jobs: 1
Select jobs to execute...
Selecting jobs to run using greedy solver.
Selected jobs: 1
Resources after job selection: {'_cores': 9223372036854775806, '_nodes': 99, '_job_count': 10}
Execute 1 jobs...

[Wed Oct 16 10:03:23 2024]
rule create_flagged_sampletable:
    input: qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json, qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json
    output: results/test_pipeline_samples.txt
    jobid: 24
    benchmark: benchmark/create_flagged_sampletable.create_flagged_sampletable.benchmark.tsv
    reason: Missing output files: results/test_pipeline_samples.txt; Input files updated by another job: qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json, qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "create_flagged_sampletable", "local": false, "input": ["qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json", "qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json"], "output": ["results/test_pipeline_samples.txt"], "wildcards": {}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 24}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'create_flagged_sampletable:' --allowed-rules 'create_flagged_sampletable' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json' 'qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/24.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/24.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 24 with external jobid '8183997'.
[Wed Oct 16 10:03:43 2024]
Finished job 24.
24 of 25 steps (96%) done
Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 10}
Ready jobs: 1
Select jobs to execute...
Selecting jobs to run using greedy solver.
Selected jobs: 1
Resources after job selection: {'_cores': 9223372036854775806, '_nodes': 99, '_job_count': 10}
Execute 1 jobs...

[Wed Oct 16 10:03:43 2024]
localrule all:
    input: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf, pcs/test_pipeline/raw/0053_P2017BB3S19R_S1_rna-seq_salmonTPM4.tsv, pcs/test_pipeline/raw/0053_P2017BB3S20R_S2_rna-seq_salmonTPM4.tsv, qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt, qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json, qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json, qc/multiqc_report.html, qc/customqc_report.html, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_values.tsv, results/test_pipeline_samples.txt
    jobid: 0
    reason: Input files updated by another job: qc/customqc_report.html, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_log2values.tsv, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_values.tsv, qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_log2values.tsv, results/test_pipeline_samples.txt, qc/multiqc_report.html, pcs/test_pipeline/raw/0053_P2017BB3S20R_S2_rna-seq_salmonTPM4.tsv, pcs/test_pipeline/raw/0053_P2017BB3S19R_S1_rna-seq_salmonTPM4.tsv, qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_values.tsv, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf, qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt, qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json
    resources: tmpdir=/scratch-local/70716

[Wed Oct 16 10:03:43 2024]
Finished job 0.
25 of 25 steps (100%) done
Complete log: .snakemake/log/2024-10-16T095626.528041.snakemake.log
unlocking
removing lock
removing lock
removed all locks
cmeesters commented 1 month ago

This does not make any sense to me: scancel sbatch .... You can clearly see that all the code does is attempt to cancel job IDs. Is the cluster-generic code in the same environment? Or configuration components thereof?

freekvh commented 1 month ago

This does not make any sense to me: scancel sbatch .... You can clearly see that all the code does is attempt to cancel job IDs. Is the cluster-generic code in the same environment? Or configuration components thereof?

It is the same environment and the same pipeline; I just change the config.yaml. The cancelling is because I ctrl-c the main process, because it appears to hang (no follow-up jobs are submitted). If I ctrl-c the cluster-generic workflow, it ends as expected:

sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 10 with external jobid '8189916'.
[Wed Oct 16 20:03:09 2024]
Finished job 19.
6 of 25 steps (24%) done
^CTerminating processes on user request, this might take some time.
No --cluster-cancel given. Will exit after finishing currently running jobs.
Complete log: .snakemake/log/2024-10-16T200207.048442.snakemake.log
WorkflowError:
At least one job did not complete successfully.
(snaqs) [fvhemert@int4 test_pipeline]$ 

It could be me... if you have a working config.yaml I could also try that? I didn't find many examples...

r-blanchet commented 1 month ago

Hello, I have the same problem, starting from slurm plugin version 0.10.0. Downgrading to 0.9 fixed the issue.

freekvh commented 1 month ago

Hello, I have the same problem, starting from slurm plugin version 0.10.0. Downgrading to 0.9 fixed the issue.

This prompted me to test...

@cmeesters let me know if you need more info to pinpoint this, or if you want me to test something. For now I will get going with 0.8.0.
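
For reference, a minimal sketch of how pinning the plugin to 0.8.0 could look in a conda/mamba-managed environment (the environment name snaqs is taken from the logs above; channels and exact commands may differ on your system):

# pin the executor plugin to the last version that behaved as expected here
mamba install -n snaqs -c conda-forge -c bioconda "snakemake-executor-plugin-slurm=0.8.0"

# or, inside the activated environment, with pip
pip install "snakemake-executor-plugin-slurm==0.8.0"

# verify which plugin versions are actually installed
mamba list -n snaqs | grep snakemake-executor-plugin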

cmeesters commented 1 month ago

What happens if you work in separate environments, i.e. one with the slurm executor plugin but without the (deprecated) generic cluster support, and vice versa? The same goes, obviously, for all configuration options.

Yes, better configuration documentation is on my to-do list. I hope not to be alone in this support endeavour for long.

freekvh commented 1 month ago

I'm prepping for a holiday atm; I'll look into it soon. You mean that I should make an env that does not have the generic cluster executor, right, and then test 0.11.0 again? I'm not sure if I had it installed, but I'll check.

At the moment my config is becoming quite extensive, I can share it if you want? Here? Or make a PR with an "Examples" folder?

cmeesters commented 1 month ago

Yes, take an env without the generic cluster executor and then test the (then current) version of this executor. Also, have a clean config (e.g. comment out all settings for the generic executor).
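
A rough sketch of what such a clean test setup could look like (the version numbers are simply the ones mentioned in this thread, and profiles/slurm-test is a hypothetical stripped-down profile containing only the slurm executor settings; the point is that the environment contains the slurm executor plugin but not the deprecated cluster-generic one):

# fresh environment with only the slurm executor plugin
mamba create -n smk-slurm-test -c conda-forge -c bioconda \
    snakemake=8.25.0 snakemake-executor-plugin-slurm=0.11.0
conda activate smk-slurm-test

# confirm that no cluster-generic executor plugin is present
mamba list | grep executor-plugin

# run against a stripped-down profile that only sets executor: slurm and default resources
snakemake --workflow-profile profiles/slurm-test --jobs 100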

If you want to share it, either attach a file or upload a gist and point us to the URL; a PR is definitely the wrong place.

@r-blanchet downgrading and saying “it works!” is fine to get going, but it is not generally helpful for a maintainer who otherwise does not know that there is an issue. All features of this plugin are tested extensively before merging. However, clusters are different: we are not able to test every detail in our CI pipeline, and some side effects might slip through. Please consider opening a specific issue report and providing details. Thank you.

cmeesters commented 3 weeks ago

@freekvh if, after cleaning the env, the issue persists and you want to, we can try to debug this in a video call (it might take a while, though). In that case, drop me a mail.

freekvh commented 3 weeks ago

@cmeesters Starting with a clean env didn't help. I'll drop you an email soon, let's try to solve this.

ifariasg commented 3 weeks ago

Hello,

This seems very similar to the issue I reported in #127, and oddly enough, I also run my workflow on Snellius. Maybe I was not able to explain the problem thoroughly in the issue I reported. @cmeesters, you mentioned that this was indeed intended behavior? Might the flag --slurm-cancel-workflow-upon-failure then be an alternative workaround, @freekvh?

freekvh commented 3 weeks ago

Hi @ifariasg, thank you for your feedback; after reading your issue, I don't think it is the same. I have always used --keep-going, and that works with version 0.8.0 of the Slurm executor. slurm-cancel-workflow-upon-failure sounds like something I don't want: I want the workflow to get as far as possible when something has an error.

Just to be sure, I added slurm-cancel-workflow-upon-failure: True to my config and re-tested 0.11.0, which gives me this error: snakemake: error: unrecognized arguments: --slurm-cancel-workflow-upon-failure=True. Adding it to the snakemake command instead, I get snakemake: error: unrecognized arguments: --slurm-cancel-workflow-upon-failure.

Perhaps I'm not fully understanding what you mean :)

ifariasg commented 3 weeks ago

Hi @ifariasg, thank you for your feedback; after reading your issue, I don't think it is the same. I have always used --keep-going, and that works with version 0.8.0 of the Slurm executor. slurm-cancel-workflow-upon-failure sounds like something I don't want: I want the workflow to get as far as possible when something has an error.

Just to be sure, I added slurm-cancel-workflow-upon-failure: True to my config and re-tested 0.11.0, which gives me this error: snakemake: error: unrecognized arguments: --slurm-cancel-workflow-upon-failure=True. Adding it to the snakemake command instead, I get snakemake: error: unrecognized arguments: --slurm-cancel-workflow-upon-failure.

Perhaps I'm not fully understanding what you mean :)

@freekvh I think we are talking about the same thing! I also always use '--keep-going'. A while back I started testing Snakemake 8.X (I did not get very deep into testing, as I also had other errors), and what would happen to me is that the first job (consisting of one rule) of a very long workflow would fail and Snakemake would never detect that the job had failed. In my case this first rule is needed by all other downstream rules, so Snakemake should terminate because there is nothing else to do (the standard behavior in SM 7.X).

Seeing the issue that you have now, I suspect that on Snellius the executor plugin (I tested v0.9) was not detecting anything, whether it was a successful first job (your case) or a failed job (my case in #127).

The solution that @cmeesters proposed was this new flag to cancel upon failure, which still seems odd to me but served my purpose. In your case, since the job never failed, it does not make a difference. I'm just suggesting that the culprit behind both problems may be the same?

freekvh commented 3 weeks ago

@ifariasg Ah yes, I understand now. Perhaps indeed.

Btw, since you are also using Snellius, may I ask you some other Snakemake questions? My main issue now is job grouping; I already posted on bioinformatics.stackexchange.com. Perhaps you have some working examples online?

The Snellius admins are not big fans of Snakemake, but after asking some questions it turns out that is mainly because it makes it very easy to get started without understanding the underlying infrastructure and to fire off a load of jobs that request much more resources than needed, and because they aren't able to debug Snakemake workflows themselves.

In my case I currently struggle with their minimum requirement of 16 cores per job, since (like most bioinformaticians) I have one large process (mapping/quantifying) and then a lot of small 1-core/low-RAM QC jobs (but I'm just getting started on Snellius; the previous Slurm cluster I used had no problem with 1-core jobs). Honestly, I don't understand that 16-core minimum; when I asked about it, they said "with such jobs you can just use a laptop". Yeah, of course, but that would be pretty inconvenient, plus there are really a lot of small jobs...

Anyway, job grouping probably solves all that, since it gives you (a lot) fewer jobs in the queue, allows for some caching on a node's local disk between "jobs", and generally causes much less overhead. So it's a good thing regardless, and I understand why the admins want me to use it instead of making single cores available.
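
For what it's worth, a minimal sketch of how job grouping can be switched on from the command line (the rule names fastqc and trim_galore come from the logs above; the group name qc and the component count 10 are just illustrative, and the same grouping can also be declared with the group directive in the rules or in a profile):

# bundle the small QC rules into one group, and pack 10 such group units
# into a single submitted job, so far fewer (and larger) jobs hit the queue
snakemake --executor slurm --jobs 100 \
    --groups fastqc=qc trim_galore=qc \
    --group-components qc=10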

I was thinking of writing a bit about this when I get everything working nicely, and perhaps starting a Snakemake community for Snellius; the admins would certainly appreciate better "support" for it and a place to send people.

cmeesters commented 3 weeks ago

@freekvh I prepared a PR (#161), but it is not thoroughly tested — yet. If you pull the code and switch to the branch, you can run poetry install in your environment, which sets the path to your git repo clone. Please test and report back.
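
A sketch of one way to do that without poetry, installing the checked-out code into the existing snaqs environment with pip (the branch name is the one used in the pyproject.toml further down; replace it with whatever the PR branch is called):

# clone the plugin repository and switch to the PR branch
git clone https://github.com/snakemake/snakemake-executor-plugin-slurm.git
cd snakemake-executor-plugin-slurm
git switch fix/sbatch-stderr-parsing

# install the checked-out code into the existing environment,
# replacing the released plugin version
conda activate snaqs
pip install -e .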

As for Snellius: I am happy to discuss issues with my colleagues over at SURF. As discussed, job arrays and pooled jobs are on their way (albeit not too soon, as my schedule is dense). I/we offer courses, too, including some 30-minute-plus on-site discussions for admins ;-).

freekvh commented 3 weeks ago

Hi @cmeesters, I tried using poetry with this pyproject.toml:

[tool.poetry]
name = "snaqs"
version = "2.0.0"
description = ""
authors = ["Freek... <freek@x.x>"]
readme = "README.md"
packages = []
# package-mode = false

[tool.poetry.dependencies]
python = "^3.11"
pandas = "*"
snakemake = "=8.25.0"
snakemake-executor-plugin-slurm = {git = "git@github.com:snakemake/snakemake-executor-plugin-slurm.git", branch='fix/sbatch-stderr-parsing'}

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

It fails with this error:

(base) [fvhemert@int6 snaqs]$ poetry install
The currently activated Python version 3.10.14 is not supported by the project (^3.11).
Trying to find and use a compatible version. 
Using python3.11 (3.11.7)
Creating virtualenv snaqs-Oei8VcMh-py3.11 in /home/fvhemert/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies... (64.8s)
Resolving dependencies... (77.9s)

Package operations: 54 installs, 0 updates, 0 removals

  - Installing attrs (24.2.0)
  - Installing rpds-py (0.20.1)
  - Installing argparse-dataclass (2.0.0)
  - Installing configargparse (1.7)
  - Installing referencing (0.35.1)
  - Installing jsonschema-specifications (2024.10.1)
  - Installing platformdirs (4.3.6)
  - Installing smmap (5.0.1)
  - Installing snakemake-interface-common (1.17.4)
  - Installing throttler (1.2.2)
  - Installing traitlets (5.14.3)
  - Installing certifi (2024.8.30)
  - Installing charset-normalizer (3.4.0)
  - Installing dpath (2.2.0)
  - Installing fastjsonschema (2.20.0)
  - Installing gitdb (4.0.11)
  - Installing idna (3.10)
  - Installing jsonschema (4.23.0)
  - Installing jupyter-core (5.7.2)
  - Installing markupsafe (3.0.2)
  - Installing plac (1.4.3)
  - Installing pyyaml (6.0.2)
  - Installing reretry (0.11.8)
  - Installing six (1.16.0)
  - Installing snakemake-interface-executor-plugins (9.3.2)
  - Installing urllib3 (2.2.3)
  - Installing wrapt (1.16.0)
  - Installing appdirs (1.4.4)
  - Installing conda-inject (1.3.2)
  - Installing connection-pool (0.0.3)
  - Installing datrie (0.8.2): Failed

  ChefBuildError

  Backend subprocess exited when trying to invoke build_wheel

  /scratch-local/70716/tmp4nt0tivw/.venv/lib/python3.11/site-packages/setuptools/_distutils/dist.py:261: UserWarning: Unknown distribution option: 'tests_require'
    warnings.warn(msg)
  running bdist_wheel
  running build
  running build_clib
  building 'datrie' library
  creating build/temp.linux-x86_64-cpython-311/libdatrie/datrie
  gcc -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Ilibdatrie -c libdatrie/datrie/alpha-map.c -o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/alpha-map.o
  libdatrie/datrie/alpha-map.c: In function ‘alpha_map_char_to_trie’:
  libdatrie/datrie/alpha-map.c:500:21: warning: comparison of integer expressions of different signedness: ‘TrieIndex’ {aka ‘int’} and ‘AlphaChar’ {aka ‘unsigned int’} [-Wsign-compare]
    500 |     if (alpha_begin <= ac && ac <= alpha_map->alpha_end)
        |                     ^~
  gcc -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Ilibdatrie -c libdatrie/datrie/darray.c -o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/darray.o
  libdatrie/datrie/darray.c: In function ‘da_fread’:
  libdatrie/datrie/darray.c:239:22: warning: comparison of integer expressions of different signedness: ‘TrieIndex’ {aka ‘int’} and ‘long unsigned int’ [-Wsign-compare]
    239 |     if (d->num_cells > SIZE_MAX / sizeof (DACell))
        |                      ^
  gcc -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Ilibdatrie -c libdatrie/datrie/dstring.c -o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/dstring.o
  gcc -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Ilibdatrie -c libdatrie/datrie/fileutils.c -o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/fileutils.o
  libdatrie/datrie/fileutils.c: In function ‘file_read_chars’:
  libdatrie/datrie/fileutils.c:103:52: warning: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘long unsigned int’} and ‘int’ [-Wsign-compare]
    103 |     return (fread (buff, sizeof (char), len, file) == len);
        |                                                    ^~
  libdatrie/datrie/fileutils.c: In function ‘file_write_chars’:
  libdatrie/datrie/fileutils.c:109:53: warning: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘long unsigned int’} and ‘int’ [-Wsign-compare]
    109 |     return (fwrite (buff, sizeof (char), len, file) == len);
        |                                                     ^~
  gcc -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Ilibdatrie -c libdatrie/datrie/tail.c -o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/tail.o
  libdatrie/datrie/tail.c: In function ‘tail_fread’:
  libdatrie/datrie/tail.c:144:22: warning: comparison of integer expressions of different signedness: ‘TrieIndex’ {aka ‘int’} and ‘long unsigned int’ [-Wsign-compare]
    144 |     if (t->num_tails > SIZE_MAX / sizeof (TailBlock))
        |                      ^
  gcc -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Ilibdatrie -c libdatrie/datrie/trie-string.c -o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/trie-string.o
  gcc -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Ilibdatrie -c libdatrie/datrie/trie.c -o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/trie.o
  ar rcs build/temp.linux-x86_64-cpython-311/libdatrie.a build/temp.linux-x86_64-cpython-311/libdatrie/datrie/alpha-map.o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/darray.o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/dstring.o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/fileutils.o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/tail.o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/trie-string.o build/temp.linux-x86_64-cpython-311/libdatrie/datrie/trie.o
  running build_ext
  building 'datrie' extension
  creating build/temp.linux-x86_64-cpython-311/src
  gcc -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Ilibdatrie -I/scratch-local/70716/tmp4nt0tivw/.venv/include -I/usr/include/python3.11 -c src/datrie.c -o build/temp.linux-x86_64-cpython-311/src/datrie.o
  src/datrie.c:36:10: fatal error: Python.h: No such file or directory
     36 | #include "Python.h"
        |          ^~~~~~~~~~
  compilation terminated.
  error: command '/usr/bin/gcc' failed with exit code 1

  at /gpfs/home5/fvhemert/.local/share/pypoetry/venv/lib/python3.10/site-packages/poetry/installation/chef.py:164 in _prepare
      160│ 
      161│                 error = ChefBuildError("\n\n".join(message_parts))
      162│ 
      163│             if error is not None:
    → 164│                 raise error from None
      165│ 
      166│             return path
      167│ 
      168│     def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:

Note: This error originates from the build backend, and is likely not a problem with poetry but with datrie (0.8.2) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "datrie (==0.8.2)"'.

  - Installing docutils (0.21.2)
  - Installing gitpython (3.1.43)
  - Installing humanfriendly (10.0)
  - Installing immutables (0.21)
  - Installing jinja2 (3.1.4)
  - Installing nbformat (5.10.4)
  - Installing numpy (2.1.2)
  - Installing packaging (24.1)
  - Installing psutil (6.1.0)
  - Installing pulp (2.9.0)
  - Installing python-dateutil (2.9.0.post0)
  - Installing pytz (2024.2)
  - Installing requests (2.32.3)
  - Installing smart-open (7.0.5)
  - Installing snakemake-executor-plugin-slurm-jobstep (0.2.1)
  - Installing snakemake-interface-report-plugins (1.1.0)
  - Installing snakemake-interface-storage-plugins (3.3.0)
  - Installing tabulate (0.9.0)
  - Installing tzdata (2024.2)
  - Installing yte (1.5.4)

Is there a way to do this via Conda? Or is there another easy fix? The datrie package seems to be a known source of build problems: https://github.com/astral-sh/uv/issues/7525, https://duckduckgo.com/?t=ffab&q=datrie+compilation+gcc+error&ia=web
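From those search results, two ways around the datrie source build seem plausible (untested on this cluster, so just a sketch): conda-forge ships a prebuilt datrie, so installing it into the environment that the install targets should let the resolver skip compiling it; the Python.h error itself only means the Python development headers are missing, which is an admin-level fix for the system Python.

# Option A: prebuilt datrie from conda-forge, no compilation needed
conda install -c conda-forge datrie

# Option B (needs admin rights, so unrealistic on a shared cluster):
# install the headers that provide Python.h; package names vary by distro
# sudo dnf install python3.11-devel   # RHEL/Fedora-like systems
# sudo apt install python3.11-dev     # Debian/Ubuntu-like systems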

cmeesters commented 2 weeks ago

Hoi Freek,

you ought to work inside your workflow environment; otherwise poetry will not pick up the right Python version, will not find the existing dependencies, and will attempt to install them all again. Just add poetry to your workflow environment, then run it within the cloned PR directory. It will then install the plugin into your conda env path (in your case overwriting the conda installation).
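Roughly, a sketch of those steps (the env name and clone path are placeholders; the virtualenvs.create setting is a belt-and-braces way to make sure poetry installs into the active conda env instead of creating its own venv):

# activate the workflow environment and make poetry available in it
conda activate snaqs
pip install poetry                                # or: conda install -c conda-forge poetry

# run poetry inside the cloned plugin repository
cd ~/projects/snakemake-executor-plugin-slurm
poetry config virtualenvs.create false --local    # install into the active env, not a new venv
poetry install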

freekvh commented 2 weeks ago

@cmeesters A new issue arose.

I added poetry to my conda env (the one I start snakemake from), cloned this repo, stayed on main (I see the PR branch has been merged), and ran poetry install. The output of conda list | grep snake is now:

 conda list | grep snake
snakemake                 8.25.1                   pypi_0    pypi
snakemake-executor-plugin-slurm 0.11.1                   pypi_0    pypi
snakemake-executor-plugin-slurm-jobstep 0.2.1              pyhdfd78af_0    bioconda
snakemake-interface-common 1.17.4             pyhdfd78af_0    bioconda
snakemake-interface-executor-plugins 9.3.2              pyhdfd78af_0    bioconda
snakemake-interface-report-plugins 1.1.0              pyhdfd78af_0    bioconda
snakemake-interface-storage-plugins 3.3.0              pyhdfd78af_0    bioconda

Looking good, right?

But jobs won't submit now. Moreover, I had to specify my account explicitly; it no longer guesses it by itself.
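As an aside, a generic way to check which SLURM account(s) a user can submit under before pinning slurm_account in the profile (plain sacctmgr, nothing plugin-specific):

# list the account associations for the current user, no header, parsable output
sacctmgr -nP show associations user="$USER" format=Account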

$ snakemake --workflow-profile ./cluster_configs --software-deployment-method conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --verbose
Using workflow specific profile ./cluster_configs for setting default command line arguments.
host: int5
Building DAG of jobs...
Your conda installation is not configured to use strict channel priorities. This is however important for having robust and correct environments (for details, see https://conda-forge.org/docs/user/tipsandtricks.html). Please consider to configure strict priorities by executing 'conda config --set channel_priority strict'.
shared_storage_local_copies: True
remote_exec: False
SLURM run ID: 4350f5ef-b8de-4882-8c54-e3e845d5a54f
Using shell: /usr/bin/bash
Provided remote nodes: 100
Job stats:
job                           count
--------------------------  -------
all                               1
complexity_20mer_counter          4
create_flagged_sampletable        1
create_pcs_raw_files              2
customqc_parameters               2
customqc_report                   1
fastqc                            4
qc_flagging                       2
rnaseq_multiqc                    1
salmon                            2
seqrun_expression_reports         1
tpm4_normalization                2
trim_galore                       2
total                            25

Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 9223372036854775807}
Ready jobs: 6
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/64c4bcc068674c34ae65073f2feef882-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/64c4bcc068674c34ae65073f2feef882-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 45 RHS
At line 49 BOUNDS
At line 56 ENDATA
Problem MODEL has 3 rows, 6 columns and 18 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 24 - 0.02 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 24 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                24.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.05
Time (Wallclock seconds):       0.00

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.09   (Wallclock seconds):       0.00

Selected jobs: 6
Resources after job selection: {'_cores': 9223372036854775795, '_nodes': 94, '_job_count': 9223372036854775807}
Execute 6 jobs...

[Mon Nov  4 15:40:15 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip
    jobid: 16
    benchmark: benchmark/0053_P2017BB3S19R_S1_R1.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S19R_S1, read=R1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, slurm_account=eccdcdc, runtime=60, slurm_extra=-o cluster_outputs/%j.out -e cluster_outputs/%j.err, threads=1

        fastqc fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz --outdir=qc/fastqc

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers code params mtime software-env input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage software-deployment input-output storage-local-copies sources persistence source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZG93bmxvYWRfc3JhX2ZpbGVzOnRocmVhZHM9MQ== base64//ZG93bmxvYWRfc3JhX2ZpbGVzOnJ1bnRpbWU9MTIw base64//c3JhMmZhc3RxOnRocmVhZHM9Ng== base64//c3JhMmZhc3RxOnJ1bnRpbWU9MzYw base64//ZmFzdHFjOnRocmVhZHM9MQ== base64//dHJpbV9nYWxvcmU6dGhyZWFkcz00 base64//dHJpbV9nYWxvcmU6cnVudGltZT03MjA= base64//dHJpbV9nYWxvcmU6bWVtX21iPTIwMDAw base64//Y29tcGxleGl0eV8yMG1lcl9jb3VudGVyOm1lbV9tYj0xMDAwMA== base64//c2FsbW9uOnRocmVhZHM9MTY= base64//c2FsbW9uOm1lbV9tYj0yMDAwMA== base64//c2FsbW9uOnJ1bnRpbWU9NjAw base64//dHBtNF9ub3JtYWxpemF0aW9uOm1lbV9tYj0xMDAwMA== base64//dHBtNF9ub3JtYWxpemF0aW9uOnJ1bnRpbWU9MzA= base64//Y3VzdG9tcWNfcGFyYW1ldGVyczptZW1fbWI9MTAwMDA= base64//Y3VzdG9tcWNfcGFyYW1ldGVyczpydW50aW1lPTMw', '', '', '--default-resources base64//bWVtX21iPTEwMDA= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//c2x1cm1fYWNjb3VudD1lY2NkY2Rj base64//cnVudGltZT02MA== base64//c2x1cm1fZXh0cmE9Jy1vIGNsdXN0ZXJfb3V0cHV0cy8lai5vdXQgLWUgY2x1c3Rlcl9vdXRwdXRzLyVqLmVycic=', '']
sbatch call: sbatch --parsable --job-name 4350f5ef-b8de-4882-8c54-e3e845d5a54f --output '/gpfs/scratch1/shared/fvhemert/temp/tests/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S19R_S1_R1/%j.log' --export=ALL --comment rule_fastqc_wildcards_0053_P2017BB3S19R_S1_R1 -A 'eccdcdc' -p rome -t 60 --mem 1000 --ntasks=1 --cpus-per-task=1 -o cluster_outputs/%j.out -e cluster_outputs/%j.err -D /gpfs/scratch1/shared/fvhemert/temp/tests --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python -m snakemake --snakefile /gpfs/scratch1/shared/fvhemert/temp/tests/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S19R_S1,read=R1' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' 'threads=1' --wait-for-files '/gpfs/scratch1/shared/fvhemert/temp/tests/.snakemake/tmp.hqgt7c3p' 'fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/c9c3b7b81cc14baf5adafcae073b9cad_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers code params mtime software-env input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage software-deployment input-output storage-local-copies sources persistence source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZG93bmxvYWRfc3JhX2ZpbGVzOnRocmVhZHM9MQ== base64//ZG93bmxvYWRfc3JhX2ZpbGVzOnJ1bnRpbWU9MTIw base64//c3JhMmZhc3RxOnRocmVhZHM9Ng== base64//c3JhMmZhc3RxOnJ1bnRpbWU9MzYw base64//ZmFzdHFjOnRocmVhZHM9MQ== base64//dHJpbV9nYWxvcmU6dGhyZWFkcz00 base64//dHJpbV9nYWxvcmU6cnVudGltZT03MjA= base64//dHJpbV9nYWxvcmU6bWVtX21iPTIwMDAw base64//Y29tcGxleGl0eV8yMG1lcl9jb3VudGVyOm1lbV9tYj0xMDAwMA== base64//c2FsbW9uOnRocmVhZHM9MTY= base64//c2FsbW9uOm1lbV9tYj0yMDAwMA== base64//c2FsbW9uOnJ1bnRpbWU9NjAw base64//dHBtNF9ub3JtYWxpemF0aW9uOm1lbV9tYj0xMDAwMA== base64//dHBtNF9ub3JtYWxpemF0aW9uOnJ1bnRpbWU9MzA= base64//Y3VzdG9tcWNfcGFyYW1ldGVyczptZW1fbWI9MTAwMDA= base64//Y3VzdG9tcWNfcGFyYW1ldGVyczpydW50aW1lPTMw --default-resources base64//bWVtX21iPTEwMDA= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//c2x1cm1fYWNjb3VudD1lY2NkY2Rj base64//cnVudGltZT02MA== base64//c2x1cm1fZXh0cmE9Jy1vIGNsdXN0ZXJfb3V0cHV0cy8lai5vdXQgLWUgY2x1c3Rlcl9vdXRwdXRzLyVqLmVycic= --executor slurm-jobstep --jobs 1 --mode remote"
unlocking
removing lock
removing lock
removed all locks
Full Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/cli.py", line 2158, in args_to_api
    dag_api.execute_workflow(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/api.py", line 595, in execute_workflow
    workflow.execute(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/workflow.py", line 1264, in execute
    raise e
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/workflow.py", line 1260, in execute
    success = self.scheduler.schedule()
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/scheduler.py", line 319, in schedule
    self.run(runjobs)
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/scheduler.py", line 419, in run
    executor.run_jobs(jobs)
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_interface_executor_plugins/executors/base.py", line 72, in run_jobs
    self.run_job(job)
  File "/gpfs/home5/fvhemert/projects/snakemake-executor-plugin-slurm/snakemake_executor_plugin_slurm/__init__.py", line 237, in run_job
    raise WorkflowError(
snakemake_interface_common.exceptions.WorkflowError: SLURM job submission failed. The error message was sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.

WorkflowError:
SLURM job submission failed. The error message was sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.

  File "/gpfs/home5/fvhemert/projects/snakemake-executor-plugin-slurm/snakemake_executor_plugin_slurm/__init__.py", line 237, in run_job

Did I do anything wrong?
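Side note for anyone reading the verbose output above: the base64// entries in --set-resources and --default-resources are just base64-encoded key=value pairs, so they can be decoded to verify what was actually passed, for example:

# decode one of the default-resources entries from the log above
echo 'c2x1cm1fcGFydGl0aW9uPXJvbWU=' | base64 -d    # prints: slurm_partition=rome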

cmeesters commented 2 weeks ago

The branch has been merged, but only after a review, not after waiting for your input. And it has NOT been released, which is at least something. No, you did not do anything wrong; I did. I am terribly overworked and did not want the branch to be merged yet; the PR had a remark to wait for your input. Grompf. Now the plugin will fail whenever there is anything in stderr, which for you is guaranteed. Alas, I cannot turn this back.

I will attempt another hot-fix this afternoon (our time), as I will not find the time sooner. A little patience, please. ;-)
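To illustrate why non-empty stderr is guaranteed for you: Snellius prints those informational banners on every sbatch call, and as far as I can tell they go to stderr, while --parsable keeps the bare job ID on stdout. A quick check from a login node (a throwaway job, nothing plugin-specific):

# keep the two streams apart to see what sbatch actually emits where
out=$(sbatch --parsable --wrap="sleep 60" 2>banners.txt)
echo "job id on stdout: ${out}"    # e.g. 8178969
cat banners.txt                    # the "Single-node jobs run on a shared node ..." lines
scancel "${out}"                   # clean up the throwaway job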

fgvieira commented 2 weeks ago

Sorry, I did not realize that you wanted to wait for feedback before merging. I assumed that, since a review was requested, it was good to go!

freekvh commented 2 weeks ago

> The branch has been merged, but only after a review, not after waiting for your input. And it has NOT been released, which is at least something. No, you did not do anything wrong; I did. I am terribly overworked and did not want the branch to be merged yet; the PR had a remark to wait for your input. Grompf. Now the plugin will fail whenever there is anything in stderr, which for you is guaranteed. Alas, I cannot turn this back.
>
> I will attempt another hot-fix this afternoon (our time), as I will not find the time sooner. A little patience, please. ;-)

I feel no entitlement to your time, and any effort is highly appreciated. Please take it easy :)

cmeesters commented 2 weeks ago

@freekvh please pull the code from #165 and try again

@fgvieira never mind. It was my mistake to label it as ready for review; with all the messages from the code rabbit, human remarks are easily lost.
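Roughly, assuming your clone from before is still around (the local branch name pr-165 is just a placeholder):

cd ~/projects/snakemake-executor-plugin-slurm
git fetch origin pull/165/head:pr-165   # fetch the PR head into a local branch
git checkout pr-165
poetry install                          # reinstall into the active workflow env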

freekvh commented 2 weeks ago

> @freekvh please pull the code from #165 and try again
>
> @fgvieira never mind. It was my mistake to label it as ready for review; with all the messages from the code rabbit, human remarks are easily lost.

@cmeesters: It works! Thank you!!

cmeesters commented 2 weeks ago

Hoi Freek,

thanks for all your feedback! The release has been published now(!). It will be available on Bioconda and PyPI in a few hours.
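Once the packages are rebuilt, upgrading should be a matter of (depending on how you installed the plugin):

pip install --upgrade snakemake-executor-plugin-slurm
# or, once the bioconda build is available:
conda update -c bioconda snakemake-executor-plugin-slurm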

Cheers Christian