I'm assuming this command you mentioned is the full command you are using to run the workflow, right?
snakemake --cluster qsub --jobs 1
This workflow (and, I think, most snakemake workflows out there) uses environment files defined via the conda: directive to install the necessary software with conda or mamba (which you most probably also used to install snakemake itself). However, you have to actively tell snakemake to use them by passing the --use-conda flag on the command line. I guess the error message could very well be improved for this failure case...
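For instance, keeping the rest of your invocation unchanged, that would look something like this:

snakemake --use-conda --cluster qsub --jobs 1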
I'm closing this issue with the assumption that this is the problem. If it is something else, feel free to reopen it, or open new issues if you come across further problems. Otherwise, happy analysis!
Thank you for the advice. I tried the following command this time: snakemake --use-conda --jobs 1 --cluster "qsub -V -b y -S /bin/bash" and it is stepping through the 29 jobs. One question I had: what can I do so that I don't have to wait for all 29 steps to finish, or risk accidentally closing the terminal window? One idea was to put an "&" at the end of the command; would this work? Also, I am not quite clear on how this pipeline works. Is there a way to save the job scripts being submitted to the cluster? Thank you.
If you want to keep snakemake running (for example on a server) and be able to log out, you can start it in a screen session. Some useful links are here: https://koesterlab.github.io/data-science-for-bioinfo/servers/screen.html
Also, feel free to look around that knowledge base for further recommendations on snakemake and on bioinformatics more generally.
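For example (the session name here is just an illustration), a typical screen workflow would be:

screen -S snakemake_run
snakemake --use-conda --cluster "qsub -V -b y -S /bin/bash" --jobs 1
# detach with Ctrl-a d; snakemake keeps running after you log out
screen -r snakemake_run   # reattach later to check on progress

This also answers the "&" question: backgrounding the process with "&" alone still ties it to your terminal session, so it can be killed when you close the window, whereas screen (or tmux, or nohup) keeps it alive after you disconnect.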
Also, as you are using a cluster system, you can probably increase the number of --jobs that are submitted in parallel. Whenever multiple samples can be handled in parallel, snakemake will automatically do that for you.
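For example (the number 20 is just an illustration; pick whatever your cluster can reasonably handle):

snakemake --use-conda --cluster "qsub -V -b y -S /bin/bash" --jobs 20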
And if you want to know more about what is actually run, you can look around the repository here. All the rules that are executed are in .smk files in the workflow/rules/ directory, and all scripts are in workflow/scripts/. And if a rule uses a wrapper: directive (for example the rule kallisto_quant:), you can look that wrapper up in the snakemake wrapper repository / docs (for example the kallisto/quant wrapper). These docs are versioned, and they list all dependencies of a wrapper and the actual code that gets executed.
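As a rough illustration only (the file paths, sample name, and wrapper version below are placeholders, not the workflow's actual values; check the rule in workflow/rules/ and the wrapper docs for the real interface), a rule using the kallisto/quant wrapper looks roughly like this:

rule kallisto_quant:
    input:
        fastq=["reads/sample1_R1.fastq.gz", "reads/sample1_R2.fastq.gz"],  # placeholder read files
        index="results/kallisto/transcriptome.idx",                        # placeholder kallisto index
    output:
        directory("results/kallisto/sample1"),   # kallisto writes its results into this directory
    log:
        "logs/kallisto_quant/sample1.log",
    threads: 4
    wrapper:
        "vX.Y.Z/bio/kallisto/quant"   # pin to a concrete wrapper release in a real workflow

The wrapper: directive tells snakemake to fetch the pinned wrapper code (which runs kallisto quant on the given inputs in its own conda environment) instead of you writing the shell command yourself.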
Hi, thank you for building such a nice tool and pipeline. I'm a first-time user of snakemake. I followed the documentation to install snakemake.
My sample.tsv looks like this:
My units.tsv looks like this:
When I run snakemake --cluster qsub --jobs 1, the following is the output.
Thank you.