pirovc / metameta


Pipeline fails with "Exiting because a job execution failed. Look above for error message" #2

Closed spabinger closed 7 years ago

spabinger commented 7 years ago

Hi,

I was able to start the pipeline successfully, but it is now failing with the following error:

Creating conda environment for /home/stephan/work/miniconda/data/envs/py35/opt/metameta/scripts/../envs/metametamerge.yaml...
Environment for /home/stephan/work/miniconda/data/envs/py35/opt/metameta/scripts/../envs/metametamerge.yaml created.
Error in job metametamerge while creating output file sample_name_1/metametamerge/archaea_bacteria/final.metametamerge.profile.out.
RuleException:
CalledProcessError in line 31 of /home/stephan/work/miniconda/data/envs/py35/opt/metameta/scripts/metametamerge.sm:
Command 'MetaMetaMerge.py --input-files sample_name_1/profiles/archaea_bacteria/kraken.profile.out sample_name_1/profiles/archaea_bacteria/clark.profile.out sample_name_1/profiles/archaea_bacteria/motus.profile.out sample_name_1/profiles/archaea_bacteria/kaiju.profile.out sample_name_1/profiles/archaea_bacteria/gottcha.profile.out --database-profiles /home/stephan/work/mirnaseq/05_metameta/databases/archaea_bacteria/kraken.dbprofile.out /home/stephan/work/mirnaseq/05_metameta/databases/archaea_bacteria/clark.dbprofile.out /home/stephan/work/mirnaseq/05_metameta/databases/archaea_bacteria/motus.dbprofile.out /home/stephan/work/mirnaseq/05_metameta/databases/archaea_bacteria/kaiju.dbprofile.out /home/stephan/work/mirnaseq/05_metameta/databases/archaea_bacteria/gottcha.dbprofile.out --tool-identifier 'kraken,clark,motus,kaiju,gottcha' --tool-method 'b,b,p,b,p' --names-file /home/stephan/work/mirnaseq/05_metameta/databases/names.dmp --nodes-file /home/stephan/work/mirnaseq/05_metameta/databases/nodes.dmp --merged-file /home/stephan/work/mirnaseq/05_metameta/databases/merged.dmp --bins 3 --cutoff 0.0001 --mode 'linear' --ranks 'species' --output-file sample_name_1/metametamerge/archaea_bacteria/final.metametamerge.profile.out --output-parsed-profiles > sample_name_1/log/archaea_bacteria/metametamerge.log 2>&1' returned non-zero exit status 1
  File "/home/stephan/work/miniconda/data/envs/py35/opt/metameta/scripts/metametamerge.sm", line 31, in __rule_metametamerge
  File "/home/stephan/work/miniconda/data/envs/py35/lib/python3.5/concurrent/futures/thread.py", line 55, in run
Job failed, going on with independent jobs.
Exiting because a job execution failed. Look above for error message

Do you know how to fix it?

Thanks, Stephan

pirovc commented 7 years ago

Hi,

It looks like the pipeline was executed, but with an error on the last step (MetaMetaMerge). From this error message alone I cannot tell much. Could you please paste the contents of the file "sample_name_1/log/archaea_bacteria/metametamerge.log"? (If the file is too big, just the end of the file with the error message.)

spabinger commented 7 years ago

Hi,

please find below the output of metametamerge.log.

Thanks, Stephan

- - - - - - - - - - - - - - - - - - - - -
           MetaMetaMerge 1.1
- - - - - - - - - - - - - - - - - - - - -
Input files: 
 kraken (b) sample_name_1/profiles/archaea_bacteria/kraken.profile.out /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/kraken.dbprofile.out
 clark (b) sample_name_1/profiles/archaea_bacteria/clark.profile.out /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/clark.dbprofile.out
 motus (p) sample_name_1/profiles/archaea_bacteria/motus.profile.out /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/motus.dbprofile.out
 kaiju (b) sample_name_1/profiles/archaea_bacteria/kaiju.profile.out /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/kaiju.dbprofile.out
 gottcha (p) sample_name_1/profiles/archaea_bacteria/gottcha.profile.out /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/gottcha.dbprofile.out
Taxonomy: 
 /home/stephan/work/epityp/mirnaseq/05_metameta/databases/names.dmp, /home/stephan/work/epityp/mirnaseq/05_metameta/databases/nodes.dmp, /home/stephan/work/epityp/mirnaseq/05_metameta/databases/merged.dmp
Bins: 3
Cutoff: 0.0001
Mode: linear
Ranks: species
Output file (type): sample_name_1/metametamerge/archaea_bacteria/final.metametamerge.profile.out (bioboxes)
Verbose: False
Detailed: False
- - - - - - - - - - - - - - - - - - - - -

Parsing taxonomy (names, nodes, merged) ... 

Reading database profiles ...
 - /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/kraken.dbprofile.out (tsv)
    species - 1529 entries (0 ignored)
    1 taxons with merged entries [94694]
    Total - 1528 taxons
 - /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/clark.dbprofile.out (tsv)
    species - 1461 entries (0 ignored)
    1 taxons with merged entries [94694]
    Total - 1460 taxons
 - /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/motus.dbprofile.out (tsv)
    species - 1815 entries (5 ignored)
    1 taxons with merged entries [1565]
    Total - 1814 taxons
 - /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/kaiju.dbprofile.out (tsv)
    species - 2419 entries (0 ignored)
    2 taxons with merged entries [94694,1334193]
    Total - 2417 taxons
 - /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/gottcha.dbprofile.out (tsv)
    species - 1471 entries (0 ignored)
    1 taxons with merged entries [94694]
    Total - 1470 taxons

Reading profiles ...
 - sample_name_1/profiles/archaea_bacteria/kraken.profile.out (tsv)
    1100480 lines (0 ignored)
    species - 864 entries
    Total - 864 taxons
 - sample_name_1/profiles/archaea_bacteria/clark.profile.out (tsv)
    497082 lines (0 ignored)
    species - 798 entries
    Total - 798 taxons
 - sample_name_1/profiles/archaea_bacteria/motus.profile.out (tsv)
    species - 2 entries (0 ignored)
    Total - 2 taxons
 - sample_name_1/profiles/archaea_bacteria/kaiju.profile.out (tsv)
Traceback (most recent call last):
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/1698337c/lib/python3.6/site-packages/metametamerge/parse_files.py", line 83, in parse_binning
    pandas_parsed = pd.read_csv(input_file, sep="\t", header=None, skiprows=header_count, names=('taxid','len'), usecols=[1,2], converters={'taxid': lambda txid: retrieveValidTaxID(int(txid))}, dtype={'len':int})
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/1698337c/lib/python3.6/site-packages/pandas/io/parsers.py", line 655, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/1698337c/lib/python3.6/site-packages/pandas/io/parsers.py", line 411, in _read
    data = parser.read(nrows)
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/1698337c/lib/python3.6/site-packages/pandas/io/parsers.py", line 1005, in read
    ret = self._engine.read(nrows)
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/1698337c/lib/python3.6/site-packages/pandas/io/parsers.py", line 1748, in read
    data = self._reader.read(nrows)
  File "pandas/_libs/parsers.pyx", line 890, in pandas._libs.parsers.TextReader.read (pandas/_libs/parsers.c:10862)
  File "pandas/_libs/parsers.pyx", line 912, in pandas._libs.parsers.TextReader._read_low_memory (pandas/_libs/parsers.c:11138)
  File "pandas/_libs/parsers.pyx", line 989, in pandas._libs.parsers.TextReader._read_rows (pandas/_libs/parsers.c:12175)
  File "pandas/_libs/parsers.pyx", line 1112, in pandas._libs.parsers.TextReader._convert_column_data (pandas/_libs/parsers.c:14103)
  File "pandas/_libs/parsers.pyx", line 2279, in pandas._libs.parsers._apply_converter (pandas/_libs/parsers.c:30638)
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/1698337c/lib/python3.6/site-packages/metametamerge/parse_files.py", line 83, in <lambda>
    pandas_parsed = pd.read_csv(input_file, sep="\t", header=None, skiprows=header_count, names=('taxid','len'), usecols=[1,2], converters={'taxid': lambda txid: retrieveValidTaxID(int(txid))}, dtype={'len':int})
ValueError: invalid literal for int() with base 10: '1:N:0:GTAGAG'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/1698337c/bin/MetaMetaMerge.py", line 338, in <module>
    main()
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/1698337c/bin/MetaMetaMerge.py", line 131, in main
    tool = Tools(input_file, identifiers[idx], methods[idx], parse_files(input_file, methods[idx], all_names_scientific, all_names_other, nodes, merged, ranks, args.verbose), ranks, args.verbose)
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/1698337c/lib/python3.6/site-packages/metametamerge/parse_files.py", line 157, in parse_files
    binning_result, binning_count = parse_binning(input_file)
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/1698337c/lib/python3.6/site-packages/metametamerge/parse_files.py", line 97, in parse_binning
    taxid = int(fields[1])
ValueError: invalid literal for int() with base 10: '1:N:0:GTAGAG'
pirovc commented 7 years ago

Thanks!

There's an error while parsing the kaiju output file. We have experienced some problems with kaiju before, especially when dealing with reads of different lengths (it did not execute correctly and exited with a segmentation fault). Can you also send the kaiju log files (sample_name_1/log/archaea_bacteria/kaiju_run_1.log and sample_name_1/log/archaea_bacteria/kaiju_rpt.log) so we can see whether the error is generated by the tool or by the pipeline's parsing?
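The traceback above narrows this down: MetaMetaMerge expects an integer taxid in the second tab-separated column (taxid = int(fields[1])) and instead found the FASTQ header comment '1:N:0:GTAGAG'. A minimal, hypothetical checker (not part of MetaMeta) can locate such lines in an output file:

```python
# Hypothetical checker, not part of MetaMeta: report lines of a binning
# output whose second tab-separated column is not an integer taxid --
# the exact field the traceback above fails on (taxid = int(fields[1])).
def find_bad_taxid_lines(lines):
    bad = []
    for lineno, line in enumerate(lines, start=1):
        fields = line.rstrip("\n").split("\t")
        try:
            int(fields[1])
        except (IndexError, ValueError):
            bad.append((lineno, line))
    return bad

sample = [
    "read_1\t562\t101",          # fine: integer taxid in column 2
    "read_2\t1:N:0:GTAGAG\t98",  # the FASTQ header comment from the error
]
print(find_bad_taxid_lines(sample))  # -> [(2, 'read_2\t1:N:0:GTAGAG\t98')]
```

Running this over the kaiju results file would show whether the bad value comes from the tool's output itself or from the pipeline's conversion step.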

Vitor

spabinger commented 7 years ago

Hi,

logs:

Stephan

pirovc commented 7 years ago

Hi Stephan,

It looks like kaiju executed just fine. Something unexpected is happening while parsing the output file. A temporary workaround is to remove kaiju from the configuration file and run MetaMeta without it for now, until I find the solution.

It would be very helpful if you could also send me the head and tail of the kaiju output file, so I can debug it:

head sample_name_1/kaiju/archaea_bacteria/sample_name_1.results.out
tail sample_name_1/kaiju/archaea_bacteria/sample_name_1.results.out

Vitor

spabinger commented 7 years ago

Hi,

alright, I'll start it without Kaiju.

Content:

pirovc commented 7 years ago

Hi Stephan,

Thank you for helping me out with the information to debug. I found out that there was a parsing problem in the kaiju output, caused by a space in your read headers.

As a quick fix, please replace the /home/stephan/work/miniconda/data/envs/py35/opt/metameta/kaiju.sm file with this one: kaiju.sm (let me know if you try it out!)
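A sketch of the general kind of fix (an assumption for illustration, not the actual kaiju.sm patch): truncate FASTQ headers at the first whitespace so the comment part ("1:N:0:GTAGAG") can never end up in a taxid column downstream:

```python
# Sketch under an assumption (not the actual kaiju.sm patch): keep only
# the read id of each FASTQ header, dropping the space-separated comment
# that was being misread as a taxid.
def trim_fastq_headers(lines):
    out = []
    for i, line in enumerate(lines):
        if i % 4 == 0 and line.startswith("@"):  # header of each 4-line record
            out.append(line.split(None, 1)[0])   # cut at first whitespace
        else:
            out.append(line)
    return out

record = ["@read_1 1:N:0:GTAGAG", "ACGT", "+", "IIII"]
print(trim_fastq_headers(record)[0])  # -> @read_1
```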

I still need to run some tests; later this week a new version (v1.1.1) will be online, and it will be possible to update MetaMeta with the command: conda update metameta

Vitor

spabinger commented 7 years ago

Hi,

thanks for the update. I replaced the file; note that the tools sub-folder was missing from the path above, so the correct location is /home/stephan/work/miniconda/data/envs/py35/opt/metameta/tools/kaiju.sm

Will post an update with results.

Best, Stephan

spabinger commented 7 years ago

Hi,

the pipeline unfortunately failed again:

An error has occured.
Please check the main log file for more information:
        /home/stephan/work/epityp/mirnaseq/05_metameta/outputmetameta_2017-09-05_17-18-55.log
Detailed output and execution time for each rule can be found at:
        /home/stephan/work/epityp/mirnaseq/05_metameta/databaseslog/
        /home/stephan/work/epityp/mirnaseq/05_metameta/outputSAMPLE_NAME/log/
Touching output file sample_name_1/clean_reads.done.
20 of 22 steps (91%) done
Creating conda environment for /home/stephan/work/miniconda/data/envs/py35/opt/metameta/scripts/../envs/krona.yaml...
Environment for /home/stephan/work/miniconda/data/envs/py35/opt/metameta/scripts/../envs/krona.yaml created.
Error in job krona while creating output file sample_name_1/metametamerge/archaea_bacteria/final.metametamerge.profile.html.
RuleException:
NameError in line 8 of /home/stephan/work/miniconda/data/envs/py35/opt/metameta/scripts/krona.sm:
The name 'krona' is unknown in this context. Please make sure that you defined that variable. Also note that braces not used for variable access have to be escaped by repeating them, i.e. {{print $1}}
  File "/home/stephan/work/miniconda/data/envs/py35/opt/metameta/scripts/krona.sm", line 8, in __rule_krona
  File "/home/stephan/work/miniconda/data/envs/py35/lib/python3.5/concurrent/futures/thread.py", line 55, in run
Job failed, going on with independent jobs.
Exiting because a job execution failed. Look above for error message
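For readers puzzled by the brace remark in this error message: Snakemake passes shell commands through Python's str.format, so a literal awk '{print $1}' is read as a placeholder named 'print $1' unless the braces are doubled. A minimal illustration, with an invented command template (in this thread the real cause turned out to be a missing 'krona' config entry, not the braces):

```python
# Invented template for illustration: Snakemake formats shell commands
# with Python's str.format, so single braces are treated as placeholders
# and literal braces must be doubled.
bad = "cut -f 1 {infile} | awk '{print $1}'"
ok = "cut -f 1 {infile} | awk '{{print $1}}'"

try:
    bad.format(infile="profile.out")
except KeyError as e:
    print("unescaped braces raise:", e)

print(ok.format(infile="profile.out"))  # -> cut -f 1 profile.out | awk '{print $1}'
```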

Best, Stephan

pirovc commented 7 years ago

Hi Stephan,

Looks like the kaiju parsing worked out. The pipeline finished, at least for the main results, which you can find under your working directory (final.metametamerge.profile.out). The error happens when MetaMeta tries to generate the Krona plots for visualization. Can you send me the krona.log file?

Thanks, Vitor

spabinger commented 7 years ago

Hi,

I didn't find the krona.log file - could you give me a pointer?

Thanks, Stephan

pirovc commented 7 years ago

Sure! Based on your log files it's supposed to be at: /home/stephan/work/epityp/mirnaseq/05_metameta/output/sample_name_1/log/archaea_bacteria/krona.log

spabinger commented 7 years ago

Hi,

unfortunately there is no krona.log file in this directory. Do I need to specify something in the yaml configuration?

Current:

# Output directory
workdir: "/home/stephan/work/epityp/mirnaseq/05_metameta/output/" 

# Database output directory (Tip: create this folder in a common directory so it could be used for other runs as well as other users)
dbdir: "/home/stephan/work/epityp/mirnaseq/05_metameta/databases/"

# Sample (name and files)
samples:
  "sample_name_1":
    fq1: "/home/stephan/work/epityp/mirnaseq/05_metameta/input/not_mapped.fastq.gz"
## Add more samples here

################################################################

# Configured tools (p=profiling, b=binning) from tools folder (tool.sm and tool_db.sm)
tools:
    "clark": "b"
    "dudes": "p"
    "gottcha": "p"
    "kaiju": "b"
    "kraken": "b"
    "motus": "p"

### MetaMeta Pipeline ### 
# Number of threads for each tool (distributed among the number of cores defined by main parameter --cores)
threads: 12
# Gzipped input files (0: not gzipped / 1: gzipped). Default: 0
gzipped: 1
# Keep intermediate files (database, reads and output) (0: do not keep files / 1: keep all files). Default: 0
keepfiles: 1 ### TODO change to 0

### Trimming ### 
# Trimmomatic (0: off / 1: on). Default: 0
trimming: 0
# Trimmomatic parameter desiredminlen (minimum desired length). Default: 70
desiredminlen: 70
# Trimmomatic parameter strictness (range from 0-1 -> 0: not strict / 1: very strict). Default: 0.8
strictness: 0.8

### Error correction ###
# BayesHammer (0: off / 1: on). Default: 0
errorcorr: 0

### Sub-sampling ###
# Sub-sampling (0: off / 1: on). Default: 0
subsample: 0
# Sub-sample size (Integer: specific read number / Float: percentage of reads / 1: equally divide the whole set among the tools). Default: 1
samplesize: 1
# Sub-sample with replacement (0: without replacement / 1: with replacement). Default: 0
replacement: 0

### MetaMetaMerge ###
# MetaMetaMerge mode (precise, very-precise, linear, sensitive, very-sensitive). Default: linear
mode: 'linear'
# MetaMetaMerge minimum abundance or Maximum results for each taxonomic level (0: off / 0-1: minimum relative abundance / >=1: maximum number of identifications). Default: 0.0001
cutoff: 0.0001
# MetaMetaMerge number of bins. Default: 4
bins: 3
# MetaMetaMerge reversed output. Generates the merged output based only on species identifications and estimate upper levels (0: off / 1: on). Default: 1
reversed: 1
# MetaMetaMerge detailed output. Generates detailed output for each taxon/tool and its normalized relative abundance estimation. It outputs 0 for tools which did not identify the taxon and -1 for tools without the taxon in their database. (0: off / 1: on). Default: 0
detailed: 0

### MetaMeta paths ###
# Alternative path for the databases (uses it instead of the default path)
# Warning: when keepfiles=0 metameta will delete all files inside the database folder but the ones necessary for the tools to run.
db_alt_path:
    "clark": ""
    "dudes": ""
    "gottcha": ""
    "kaiju": ""
    "kraken": ""
    "motus": ""

# Alternative path for the tools (uses it instead of the default path)
tool_alt_path:
    "clark": ""
    "dudes": ""
    "gottcha": ""
    "kaiju": ""
    "kraken": ""
    "motus": ""
    "trimmomatic": ""
    "spades": ""
    "bowtie2": ""
    "metametamerge": ""
pirovc commented 7 years ago

Hi Stephan,

Now I see the error. You are using a slightly older version of the configuration file (from version 1.0) and "krona" is not defined there. Since you are not using alternative paths for any tool, I would recommend deleting all those empty assignments. There were also some changes to the MetaMetaMerge parameters (updated complete configuration file). By omitting some configurations, MetaMeta will use the default values. If you use the example below it should work fine:

workdir: "/home/stephan/work/epityp/mirnaseq/05_metameta/output/" 

# Database output directory (Tip: create this folder in a common directory so it could be used for other runs as well as other users)
dbdir: "/home/stephan/work/epityp/mirnaseq/05_metameta/databases/"

# Sample (name and files)
samples:
  "sample_name_1":
    fq1: "/home/stephan/work/epityp/mirnaseq/05_metameta/input/not_mapped.fastq.gz"
## Add more samples here

################################################################

# Configured tools (p=profiling, b=binning) from tools folder (tool.sm and tool_db.sm)
tools:
    "clark": "b"
    "dudes": "p"
    "gottcha": "p"
    "kaiju": "b"
    "kraken": "b"
    "motus": "p"

### MetaMeta Pipeline ### 
# Number of threads for each tool (distributed among the number of cores defined by main parameter --cores)
threads: 12
# Gzipped input files (0: not gzipped / 1: gzipped). Default: 0
gzipped: 1
# Keep intermediate files (database, reads and output) (0: do not keep files / 1: keep all files). Default: 0
keepfiles: 1 ### TODO change to 0
spabinger commented 7 years ago

Hi,

thanks for the yaml file. This time the pipeline failed with this message:

Creating conda environment for /home/stephan/work/miniconda/data/envs/py35/opt/metameta/tools/../envs/dudes.yaml...
Environment for /home/stephan/work/miniconda/data/envs/py35/opt/metameta/tools/../envs/dudes.yaml created.
4 of 26 steps (15%) done
rule dudes_run_2:
    input: sample_name_1/dudes/archaea_bacteria/sample_name_1.sam
    output: sample_name_1/dudes/archaea_bacteria/sample_name_1.results.out
    log: sample_name_1/log/archaea_bacteria/dudes_run_2.log
    benchmark: sample_name_1/log/archaea_bacteria/dudes_run_2.time
    wildcards: database=archaea_bacteria, sample=sample_name_1
    threads: 12

Error in job dudes_run_2 while creating output file sample_name_1/dudes/archaea_bacteria/sample_name_1.results.out.
RuleException:
CalledProcessError in line 20 of /home/stephan/work/miniconda/data/envs/py35/opt/metameta/tools/dudes.sm:
Command 'DUDes.py -s sample_name_1/dudes/archaea_bacteria/sample_name_1.sam -d /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/dudes_db/dudes_db.npz -t 12 -a 0.00001 -o sample_name_1/dudes/archaea_bacteria/sample_name_1.results > sample_name_1/log/archaea_bacteria/dudes_run_2.log 2>&1' returned non-zero exit status 1
  File "/home/stephan/work/miniconda/data/envs/py35/opt/metameta/tools/dudes.sm", line 20, in __rule_dudes_run_2
  File "/home/stephan/work/miniconda/data/envs/py35/lib/python3.5/concurrent/futures/thread.py", line 55, in run
Job failed, going on with independent jobs.
rule kraken_run_1:
    input: /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/kraken_db_check.done, sample_name_1/reads/kraken.1.fq
    output: sample_name_1/kraken/archaea_bacteria/sample_name_1.results.out
    log: sample_name_1/log/archaea_bacteria/kraken_run_1.log
    benchmark: sample_name_1/log/archaea_bacteria/kraken_run_1.time
    wildcards: database=archaea_bacteria, sample=sample_name_1
    threads: 12
- - - - - - - - - - - - - - - - - - - - -
|               DUDes v0.07             |
- - - - - - - - - - - - - - - - - - - - -
Output prefix = sample_name_1/dudes/archaea_bacteria/sample_name_1.results
SAM (format) = sample_name_1/dudes/archaea_bacteria/sample_name_1.sam (nm)
Database = /home/stephan/work/epityp/mirnaseq/05_metameta/databases/archaea_bacteria/dudes_db/dudes_db.npz
Threads = 12
TaxID Start/Last rank = 1/species
Max. Read Matches = 0.000000
Min. Ref. Matches = 0.000010
Bin size = 0.250000
- - - - - - - - - - - - - - - - - - - - -
Traceback (most recent call last):
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/f3a7047b/bin/DUDes.py", line 681, in <module>
    main()
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/f3a7047b/bin/DUDes.py", line 101, in main
    pool = mp.Pool(threads)
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/f3a7047b/lib/python3.6/multiprocessing/context.py", line 119, in Pool
    context=self.get_context())
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/f3a7047b/lib/python3.6/multiprocessing/pool.py", line 156, in __init__
    self._setup_queues()
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/f3a7047b/lib/python3.6/multiprocessing/pool.py", line 249, in _setup_queues
    self._inqueue = self._ctx.SimpleQueue()
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/f3a7047b/lib/python3.6/multiprocessing/context.py", line 112, in SimpleQueue
    return SimpleQueue(ctx=self.get_context())
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/f3a7047b/lib/python3.6/multiprocessing/queues.py", line 322, in __init__
    self._rlock = ctx.Lock()
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/f3a7047b/lib/python3.6/multiprocessing/context.py", line 67, in Lock
    return Lock(ctx=self.get_context())
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/f3a7047b/lib/python3.6/multiprocessing/synchronize.py", line 163, in __init__
    SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
  File "/mnt/vdb1/stephan/epityp/mirnaseq/05_metameta/output/.snakemake/conda/f3a7047b/lib/python3.6/multiprocessing/synchronize.py", line 60, in __init__
    unlink_now)
PermissionError: [Errno 13] Permission denied
pirovc commented 7 years ago

Hi Stephan,

Looks like it's a system-related problem. I have never seen it before, but I found out it's related to access to shared memory on your server (Python needs it for multiprocessing). Here is a suggestion for a solution; I hope it helps: https://stackoverflow.com/questions/2009278/python-multiprocessing-permission-denied.
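A quick diagnostic sketch (assumption: the PermissionError comes from creating a POSIX semaphore in shared memory, as SemLock.__init__ in the traceback suggests). If it returns False, shared memory (typically /dev/shm) is likely not writable for your user:

```python
# Diagnostic sketch: try to allocate one multiprocessing lock, which is
# backed by a POSIX semaphore in shared memory -- the same operation that
# fails with PermissionError in the DUDes traceback.
import multiprocessing as mp

def shared_memory_ok():
    try:
        mp.Lock()
        return True
    except (PermissionError, OSError):
        return False

print(shared_memory_ok())
```

If it prints False, check the mount options and permissions of /dev/shm as described in the Stack Overflow thread linked above.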

Best, Vitor

spabinger commented 7 years ago

Thanks for your help - without DUDes the pipeline runs through. I'm going to try to make the changes to my environment.

Best, Stephan