Closed: leoisl closed this 4 months ago
Another possibility is to add the `--nolock` argument to all the snakemake runs. This flag tells snakemake not to lock the working directory, so the "directory is locked" error won't come up if you run again after having killed a pipeline. I think not locking the working directory is only an issue if two snakemake workflows producing the same output files are running at the same time. So the question is to what extent we want to rely on users not doing that.
I think it's OK to tell users not to run two pipelines at the same time in the same working dir. The same is true of many tools.
Okay then, I've fixed this issue by adding `--nolock`!
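For reference, a minimal sketch of where `--nolock` would go in each snakemake invocation. The function name and arguments below are assumptions for illustration, not pling's actual internals:

```python
import subprocess

def build_snakemake_command(snakefile, configfile, cores=1):
    """Build the argument list for one of pling's snakemake runs.
    (Hypothetical helper; pling's real entry points may differ.)"""
    return [
        "snakemake",
        "--snakefile", snakefile,
        "--configfile", configfile,
        "--cores", str(cores),
        "--nolock",  # skip working-directory locking, so a killed run
                     # does not leave a stale lock behind
    ]

# pling would then execute it, e.g.:
# subprocess.run(build_snakemake_command("Snakefile", "config.yaml"), check=True)
```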
To `snakemake --unlock` a dir, which we need when the pipeline is preemptively killed, we need to provide a valid Snakefile, but all Snakefiles in pling fail if we call them directly with `--unlock` without a configfile, e.g.:

We get similar error messages if we use the other Snakefiles pling defines. It does work, however, if you provide a configfile that was previously created by pling:

This requires a bit of snakemake and pling internal-files knowledge from the user though, this configfile is not always available, and pipelines are frequently preemptively killed. I am not sure how you want to solve this issue. It could be just a note in the README, or we could add an `--unlock` parameter to `run_pling.py` to handle this automatically, or another approach, etc... But I think it is worth providing a solution to this.
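An `--unlock` option on `run_pling.py` could look roughly like the sketch below. The argument handling and the configfile path are assumptions, since I haven't checked how `run_pling.py` names or stores its generated config:

```python
import argparse

def build_unlock_command(snakefile, configfile):
    """Command run_pling.py would execute when --unlock is passed.
    Paths here are placeholders; pling knows where its generated
    configfile actually lives, so the user wouldn't have to."""
    return ["snakemake", "--snakefile", snakefile,
            "--configfile", configfile, "--unlock"]

def parse_args(argv=None):
    parser = argparse.ArgumentParser(prog="run_pling.py")
    parser.add_argument("--unlock", action="store_true",
                        help="unlock a working dir left locked by a killed pipeline, then exit")
    # ... pling's existing arguments would be declared here ...
    args, _ = parser.parse_known_args(argv)
    return args

if __name__ == "__main__":
    args = parse_args()
    if args.unlock:
        # hypothetical paths; run_pling.py would derive these itself
        print(" ".join(build_unlock_command("Snakefile", "pipeline_config.yaml")))
```

The point of the wrapper is that the user never has to know which Snakefile or configfile to pass; the script supplies both before invoking `snakemake --unlock`.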