**Closed** by gxydevbot 5 months ago
| Test State | Count |
| --- | --- |
| Total | 1 |
| Passed | 0 |
| Error | 0 |
| Failure | 1 |
| Skipped | 0 |
- **Step 12: Bigwig from MACS2**:
* step_state: scheduled
* <details><summary>Jobs</summary>
- **Job 1:**
* Job state is ok
**Command Line:**
* ```console
grep -v "^track" '/tmp/tmp5ncus5ir/files/d/0/f/dataset_d0f56cb2-4ffd-42f8-a40b-afef42b0f9b5.dat' | wigToBigWig stdin '/cvmfs/data.galaxyproject.org/managed/len/ucsc/mm10.len' '/tmp/tmp5ncus5ir/job_working_directory/000/8/outputs/dataset_b46a9353-d8de-48ba-b195-87b2c275c444.dat' -clip 2>&1 || echo "Error running wigToBigWig." >&2
```
**Exit Code:**
* ```console
0
```
**Traceback:**
* ```console
```
**Job Parameters:**
* | Job parameter | Parameter value |
| ------------- | --------------- |
| \_\_input\_ext | ` "bedgraph" ` |
| \_\_workflow\_invocation\_uuid\_\_ | ` "acba36a01c2d11ef8884f526e62285ed" ` |
| chromInfo | ` "/cvmfs/data.galaxyproject.org/managed/len/ucsc/mm10.len" ` |
| dbkey | ` "mm10" ` |
| settings | ` {"__current_case__": 0, "settingsType": "preset"} ` |
</details>
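The command line above pipes the bedGraph through `grep -v "^track"` before `wigToBigWig`, because `wigToBigWig` rejects UCSC track definition lines. A minimal sketch of that filtering stage (the file names here are hypothetical stand-ins for the Galaxy dataset paths):

```shell
# Strip UCSC "track" definition lines from a bedGraph so wigToBigWig accepts it.
# input.bedgraph is a hypothetical stand-in for the Galaxy dataset above.
printf 'track type=bedGraph name="pileup"\nchr1\t0\t100\t1.5\n' > input.bedgraph
grep -v "^track" input.bedgraph > clean.bedgraph
cat clean.bedgraph
```

In the real job the cleaned stream is fed to `wigToBigWig stdin <chrom.len> <out.bw> -clip` without an intermediate file.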
- **Step 13: MultiQC**:
* step_state: scheduled
* <details><summary>Jobs</summary>
- **Job 1:**
* Job state is ok
**Command Line:**
* ```console
die() { echo "$@" 1>&2 ; exit 1; } && mkdir multiqc_WDir && mkdir multiqc_WDir/cutadapt_0 && ln -s '/tmp/tmp5ncus5ir/files/5/6/1/dataset_5619e26e-0a61-4960-a930-ab6ff1b065c6.dat' 'multiqc_WDir/cutadapt_0/wt_H3K4me3.txt' && sed -i.old 's/You are running/This is/' 'multiqc_WDir/cutadapt_0/wt_H3K4me3.txt' && grep -q "This is cutadapt" 'multiqc_WDir/cutadapt_0/wt_H3K4me3.txt' || die "'This is cutadapt' or 'You are running cutadapt' not found in the file" && mkdir multiqc_WDir/bowtie2_1 && grep -q '% overall alignment rate' /tmp/tmp5ncus5ir/files/b/5/c/dataset_b5c0229c-8e30-475e-bffc-af06313a1553.dat || die "Module 'bowtie2: '% overall alignment rate' not found in the file 'wt_H3K4me3'" && ln -s '/tmp/tmp5ncus5ir/files/b/5/c/dataset_b5c0229c-8e30-475e-bffc-af06313a1553.dat' 'multiqc_WDir/bowtie2_1/wt_H3K4me3' && mkdir multiqc_WDir/macs2_2 && grep -q "# This file is generated by MACS" /tmp/tmp5ncus5ir/files/d/7/4/dataset_d74d5d89-e14a-4bca-8ebc-6eb1974075f4.dat || die "'# This file is generated by MACS' not found in the file" && ln -s '/tmp/tmp5ncus5ir/files/d/7/4/dataset_d74d5d89-e14a-4bca-8ebc-6eb1974075f4.dat' 'multiqc_WDir/macs2_2/wt_H3K4me3_peaks.xls' && multiqc multiqc_WDir --filename 'report' --export
```
**Exit Code:**
* ```console
0
```
**Standard Error:**
* ```console
/// MultiQC 🔍 | v1.11
| multiqc | MultiQC Version v1.22.1 now available!
| multiqc | Search path : /tmp/tmp5ncus5ir/job_working_directory/000/9/working/multiqc_WDir
| macs2 | Found 1 logs
| bowtie2 | Found 1 reports
| cutadapt | Found 1 reports
| multiqc | Compressing plot data
| multiqc | Report : report.html
| multiqc | Data : report_data
| multiqc | Plots : report_plots
| multiqc | MultiQC complete
```
**Standard Output:**
* ```console
| searching | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 4/4
```
**Traceback:**
* ```console
```
**Job Parameters:**
* | Job parameter | Parameter value |
| ------------- | --------------- |
| \_\_input\_ext | ` "input" ` |
| \_\_workflow\_invocation\_uuid\_\_ | ` "acba36a01c2d11ef8884f526e62285ed" ` |
| chromInfo | ` "/cvmfs/data.galaxyproject.org/managed/len/ucsc/mm10.len" ` |
| comment | ` "" ` |
| dbkey | ` "mm10" ` |
| export | ` true ` |
| flat | ` false ` |
| results | ` [{"__index__": 0, "software_cond": {"__current_case__": 5, "input": {"values": [{"id": 3, "src": "hdca"}]}, "software": "cutadapt"}}, {"__index__": 1, "software_cond": {"__current_case__": 3, "input": {"values": [{"id": 5, "src": "hdca"}]}, "software": "bowtie2"}}, {"__index__": 2, "software_cond": {"__current_case__": 16, "input": {"values": [{"id": 7, "src": "hdca"}]}, "software": "macs2"}}] ` |
| saveLog | ` false ` |
| title | ` "" ` |
</details>
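The long MultiQC command line relies on a small `die` helper plus `grep -q` sanity checks to confirm each staged file really looks like the expected tool log before MultiQC runs. The pattern in isolation (log file name and contents are hypothetical):

```shell
# Fail fast with a message on stderr when an expected marker line is missing.
die() { echo "$@" 1>&2; exit 1; }
echo "This is cutadapt 4.8 with Python 3.12" > cutadapt_log.txt  # hypothetical log
grep -q "This is cutadapt" cutadapt_log.txt || die "'This is cutadapt' not found in the file"
echo "check passed"
```

Because every check is chained with `&&`, the first failing `grep` aborts the whole command with a clear error instead of letting MultiQC silently skip a module.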
- **Step 3: adapter_reverse**:
* step_state: scheduled
- **Step 4: reference_genome**:
* step_state: scheduled
- **Step 5: effective_genome_size**:
* step_state: scheduled
- **Step 6: normalize_profile**:
* step_state: scheduled
- **Step 7: Cutadapt (remove adapter + bad quality bases)**:
* step_state: scheduled
* <details><summary>Jobs</summary>
- **Job 1:**
* Job state is ok
**Command Line:**
* ```console
ln -f -s '/tmp/tmp5ncus5ir/files/a/0/3/dataset_a0392d8a-a05c-4f1f-baa6-8dbb7c1e152a.dat' 'wt_H3K4me3_1.fq' && ln -f -s '/tmp/tmp5ncus5ir/files/a/9/1/dataset_a91f18dc-147b-4221-8864-50d755657467.dat' 'wt_H3K4me3_2.fq' && cutadapt -j=${GALAXY_SLOTS:-4} -a 'Please use: For R1: - For Nextera: CTGTCTCTTATACACATCTCCGAGCCCACGAGAC - For TrueSeq: GATCGGAAGAGCACACGTCTGAACTCCAGTCAC or AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC '='GATCGGAAGAGCACACGTCTGAACTCCAGTCAC' -A 'Please use: For R2: - For Nextera: CTGTCTCTTATACACATCTGACGCTGCCGACGA - For TruSeq: GATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT or AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT'='GATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT' --error-rate=0.1 --times=1 --overlap=3 --action=trim --minimum-length=15 -o 'out1.fq' -p 'out2.fq' 'wt_H3K4me3_1.fq' 'wt_H3K4me3_2.fq' > report.txt
```
**Exit Code:**
* ```console
0
```
**Traceback:**
* ```console
```
**Job Parameters:**
* | Job parameter | Parameter value |
| ------------- | --------------- |
| \_\_input\_ext | ` "input" ` |
| \_\_workflow\_invocation\_uuid\_\_ | ` "acba36a01c2d11ef8884f526e62285ed" ` |
| adapter\_options | ` {"action": "trim", "error_rate": "0.1", "match_read_wildcards": false, "no_indels": false, "no_match_adapter_wildcards": true, "overlap": "3", "revcomp": false, "times": "1"} ` |
| chromInfo | ` "/tmp/tmp5ncus5ir/galaxy-dev/tool-data/shared/ucsc/chrom/?.len" ` |
| dbkey | ` "?" ` |
| filter\_options | ` {"discard_casava": false, "discard_trimmed": false, "discard_untrimmed": false, "max_average_error_rate": null, "max_expected_errors": null, "max_n": null, "maximum_length": null, "maximum_length2": null, "minimum_length": "15", "minimum_length2": null, "pair_filter": "any"} ` |
| library | ` {"__current_case__": 2, "input_1": {"values": [{"id": 1, "src": "dce"}]}, "pair_adapters": false, "r1": {"adapters": [{"__index__": 0, "adapter_source": {"__current_case__": 0, "adapter": "GATCGGAAGAGCACACGTCTGAACTCCAGTCAC", "adapter_name": "Please use: For R1: - For Nextera: CTGTCTCTTATACACATCTCCGAGCCCACGAGAC - For TrueSeq: GATCGGAAGAGCACACGTCTGAACTCCAGTCAC or AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC ", "adapter_source_list": "user"}, "single_noindels": false}], "anywhere_adapters": [], "front_adapters": []}, "r2": {"adapters2": [{"__index__": 0, "adapter_source": {"__current_case__": 0, "adapter": "GATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT", "adapter_name": "Please use: For R2: - For Nextera: CTGTCTCTTATACACATCTGACGCTGCCGACGA - For TruSeq: GATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT or AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT", "adapter_source_list": "user"}, "single_noindels": false}], "anywhere_adapters2": [], "front_adapters2": []}, "type": "paired_collection"} ` |
| other\_trimming\_options | ` {"cut": "0", "cut2": "0", "nextseq_trim": "0", "poly_a": false, "quality_cutoff": "0", "quality_cutoff2": "", "shorten_options": {"__current_case__": 1, "shorten_values": "False"}, "trim_n": false} ` |
| output\_selector | ` ["report"] ` |
| read\_mod\_options | ` {"length_tag": "", "rename": "", "strip_suffix": "", "zero_cap": false} ` |
</details>
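Like the later mapping step, Cutadapt sizes its thread count with the `${GALAXY_SLOTS:-4}` parameter expansion, which falls back to 4 whenever Galaxy has not exported a slot count:

```shell
# ${VAR:-default} substitutes the default only when VAR is unset or empty.
unset GALAXY_SLOTS
echo "threads: ${GALAXY_SLOTS:-4}"   # prints "threads: 4"
GALAXY_SLOTS=8
echo "threads: ${GALAXY_SLOTS:-4}"   # prints "threads: 8"
```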
- **Step 8: Bowtie2 map on reference**:
* step_state: scheduled
* <details><summary>Jobs</summary>
- **Job 1:**
* Job state is ok
**Command Line:**
* ```console
set -o | grep -q pipefail && set -o pipefail; ln -f -s '/tmp/tmp5ncus5ir/files/6/0/9/dataset_6097e6a0-3c63-4bef-a922-987cdd9b2fb6.dat' input_f.fastq && ln -f -s '/tmp/tmp5ncus5ir/files/1/d/3/dataset_1d3940b0-ca91-42fd-b149-b712175dfa49.dat' input_r.fastq && THREADS=${GALAXY_SLOTS:-4} && if [ "$THREADS" -gt 1 ]; then (( THREADS-- )); fi && bowtie2 -p "$THREADS" -x '/cvmfs/data.galaxyproject.org/byhand/mm10/bowtie2_index/mm10' -1 'input_f.fastq' -2 'input_r.fastq' 2> >(tee '/tmp/tmp5ncus5ir/job_working_directory/000/4/outputs/dataset_b5c0229c-8e30-475e-bffc-af06313a1553.dat' >&2) | samtools sort -l 0 -T "${TMPDIR:-.}" -O bam | samtools view --no-PG -O bam -@ ${GALAXY_SLOTS:-1} -o '/tmp/tmp5ncus5ir/job_working_directory/000/4/outputs/dataset_1a90fd27-9cc7-455a-ab3b-3b2a18e23cdc.dat'
```
**Exit Code:**
* ```console
0
```
**Standard Error:**
* ```console
50000 reads; of these:
50000 (100.00%) were paired; of these:
1879 (3.76%) aligned concordantly 0 times
42760 (85.52%) aligned concordantly exactly 1 time
5361 (10.72%) aligned concordantly >1 times
----
1879 pairs aligned concordantly 0 times; of these:
276 (14.69%) aligned discordantly 1 time
----
1603 pairs aligned 0 times concordantly or discordantly; of these:
3206 mates make up the pairs; of these:
1879 (58.61%) aligned 0 times
947 (29.54%) aligned exactly 1 time
380 (11.85%) aligned >1 times
98.12% overall alignment rate
```
**Traceback:**
* ```console
```
**Job Parameters:**
* | Job parameter | Parameter value |
| ------------- | --------------- |
| \_\_input\_ext | ` "input" ` |
| \_\_job\_resource | ` {"__current_case__": 0, "__job_resource__select": "no"} ` |
| \_\_workflow\_invocation\_uuid\_\_ | ` "acba36a01c2d11ef8884f526e62285ed" ` |
| analysis\_type | ` {"__current_case__": 0, "analysis_type_selector": "simple", "presets": "no_presets"} ` |
| chromInfo | ` "/tmp/tmp5ncus5ir/galaxy-dev/tool-data/shared/ucsc/chrom/?.len" ` |
| dbkey | ` "?" ` |
| library | ` {"__current_case__": 2, "aligned_file": false, "input_1": {"values": [{"id": 4, "src": "dce"}]}, "paired_options": {"__current_case__": 1, "paired_options_selector": "no"}, "type": "paired_collection", "unaligned_file": false} ` |
| reference\_genome | ` {"__current_case__": 0, "index": "mm10", "source": "indexed"} ` |
| rg | ` {"__current_case__": 3, "rg_selector": "do_not_set"} ` |
| sam\_options | ` {"__current_case__": 1, "sam_options_selector": "no"} ` |
| save\_mapping\_stats | ` true ` |
</details>
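Because the Bowtie2 output is piped straight into `samtools sort`, the wrapper reserves one CPU slot for the sort stage by decrementing the thread count whenever more than one slot is available. That reservation logic, extracted from the command line above:

```shell
# Reserve one CPU slot for the downstream samtools sort stage.
GALAXY_SLOTS=4                     # normally exported by Galaxy; set here for the sketch
THREADS=${GALAXY_SLOTS:-4}
if [ "$THREADS" -gt 1 ]; then (( THREADS-- )); fi
echo "bowtie2 threads: $THREADS"   # prints "bowtie2 threads: 3"
```

The preceding `set -o | grep -q pipefail && set -o pipefail` enables `pipefail` only when the shell supports it, so a failure anywhere in the `bowtie2 | samtools sort | samtools view` pipeline fails the job.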
- **Step 9: filter MAPQ30 concordent pairs**:
* step_state: scheduled
* <details><summary>Jobs</summary>
- **Job 1:**
* Job state is ok
**Command Line:**
* ```console
ln -s '/tmp/tmp5ncus5ir/files/1/a/9/dataset_1a90fd27-9cc7-455a-ab3b-3b2a18e23cdc.dat' input.bam && ln -s '/tmp/tmp5ncus5ir/files/_metadata_files/8/4/2/metadata_842cec2f-3960-4a1a-b165-79790d495ccf.dat' input.bai && samtools view -o '/tmp/tmp5ncus5ir/job_working_directory/000/5/outputs/dataset_da162fa6-cb07-436b-a335-a232085b2cd7.dat' -h -b -q 30 -f 0x2 input.bam
```
**Exit Code:**
* ```console
0
```
**Traceback:**
* ```console
```
**Job Parameters:**
* | Job parameter | Parameter value |
| ------------- | --------------- |
| \_\_input\_ext | ` "bam" ` |
| \_\_workflow\_invocation\_uuid\_\_ | ` "acba36a01c2d11ef8884f526e62285ed" ` |
| bed\_file | ` None ` |
| chromInfo | ` "/cvmfs/data.galaxyproject.org/managed/len/ucsc/mm10.len" ` |
| dbkey | ` "mm10" ` |
| flag | ` {"__current_case__": 1, "filter": "yes", "reqBits": ["0x0002"], "skipBits": null} ` |
| header | ` "-h" ` |
| library | ` "" ` |
| mapq | ` "30" ` |
| outputtype | ` "bam" ` |
| possibly\_select\_inverse | ` false ` |
| read\_group | ` "" ` |
| regions | ` "" ` |
</details>
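Here `samtools view -q 30 -f 0x2` keeps only alignments with MAPQ ≥ 30 whose SAM FLAG has the 0x2 (read mapped in proper pair) bit set. The bit test itself is plain integer arithmetic, sketched here for the common FLAG value 99:

```shell
# FLAG 99 = 0x1 (paired) + 0x2 (proper pair) + 0x20 (mate reverse) + 0x40 (first in pair)
FLAG=99
if [ $(( FLAG & 0x2 )) -ne 0 ]; then echo "kept: properly paired"; else echo "filtered out"; fi
```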
- **Step 10: Call Peaks with MACS2**:
* step_state: scheduled
* <details><summary>Jobs</summary>
- **Job 1:**
* Job state is ok
**Command Line:**
* ```console
export PYTHON_EGG_CACHE=`pwd` && (macs2 callpeak -t '/tmp/tmp5ncus5ir/files/d/a/1/dataset_da162fa6-cb07-436b-a335-a232085b2cd7.dat' --name wt_H3K4me3 --format BAMPE --gsize '1870000000' --SPMR --call-summits --keep-dup '1' --d-min 20 --buffer-size 100000 --bdg --qvalue '0.05' --mfold '5' '50' --bw '300' 2>&1 > macs2_stderr) && cp wt_H3K4me3_peaks.xls '/tmp/tmp5ncus5ir/job_working_directory/000/6/outputs/dataset_d74d5d89-e14a-4bca-8ebc-6eb1974075f4.dat' && ( count=`ls -1 wt_H3K4me3* 2>/dev/null | wc -l`; if [ $count != 0 ]; then mkdir '/tmp/tmp5ncus5ir/job_working_directory/000/6/outputs/dataset_1d995ded-4990-46de-99f2-03a0defc0fb1_files' && cp -r wt_H3K4me3* '/tmp/tmp5ncus5ir/job_working_directory/000/6/outputs/dataset_1d995ded-4990-46de-99f2-03a0defc0fb1_files' && python '/tmp/shed_dir/toolshed.g2.bx.psu.edu/repos/iuc/macs2/86e2413cf3f8/macs2/dir2html.py' '/tmp/tmp5ncus5ir/job_working_directory/000/6/outputs/dataset_1d995ded-4990-46de-99f2-03a0defc0fb1_files' macs2_stderr > '/tmp/tmp5ncus5ir/job_working_directory/000/6/outputs/dataset_1d995ded-4990-46de-99f2-03a0defc0fb1.dat'; fi; ) && exit_code_for_galaxy=$? && cat macs2_stderr 2>&1 && (exit $exit_code_for_galaxy)
```
**Exit Code:**
* ```console
0
```
**Standard Output:**
* ```console
INFO @ Mon, 27 May 2024 13:37:27:
# Command line: callpeak -t /tmp/tmp5ncus5ir/files/d/a/1/dataset_da162fa6-cb07-436b-a335-a232085b2cd7.dat --name wt_H3K4me3 --format BAMPE --gsize 1870000000 --SPMR --call-summits --keep-dup 1 --d-min 20 --buffer-size 100000 --bdg --qvalue 0.05 --mfold 5 50 --bw 300
# ARGUMENTS LIST:
# name = wt_H3K4me3
# format = BAMPE
# ChIP-seq file = ['/tmp/tmp5ncus5ir/files/d/a/1/dataset_da162fa6-cb07-436b-a335-a232085b2cd7.dat']
# control file = None
# effective genome size = 1.87e+09
# band width = 300
# model fold = [5, 50]
# qvalue cutoff = 5.00e-02
# The maximum gap between significant sites is assigned as the read length/tag size.
# The minimum length of peaks is assigned as the predicted fragment length "d".
# Larger dataset will be scaled towards smaller dataset.
# Range for calculating regional lambda is: 10000 bps
# Broad region calling is off
# Paired-End mode is on
# Searching for subpeak summits is on
# MACS will save fragment pileup signal per million reads
INFO @ Mon, 27 May 2024 13:37:27: #1 read fragment files...
INFO @ Mon, 27 May 2024 13:37:27: #1 read treatment fragments...
INFO @ Mon, 27 May 2024 13:37:27: 44382 fragments have been read.
INFO @ Mon, 27 May 2024 13:37:27: #1 mean fragment size is determined as 203.1 bp from treatment
INFO @ Mon, 27 May 2024 13:37:27: #1 fragment size = 203.1
INFO @ Mon, 27 May 2024 13:37:27: #1 total fragments in treatment: 44382
INFO @ Mon, 27 May 2024 13:37:27: #1 user defined the maximum fragments...
INFO @ Mon, 27 May 2024 13:37:27: #1 filter out redundant fragments by allowing at most 1 identical fragment(s)
INFO @ Mon, 27 May 2024 13:37:27: #1 fragments after filtering in treatment: 44382
INFO @ Mon, 27 May 2024 13:37:27: #1 Redundant rate of treatment: 0.00
INFO @ Mon, 27 May 2024 13:37:27: #1 finished!
INFO @ Mon, 27 May 2024 13:37:27: #2 Build Peak Model...
INFO @ Mon, 27 May 2024 13:37:27: #2 Skipped...
INFO @ Mon, 27 May 2024 13:37:27: #3 Call peaks...
INFO @ Mon, 27 May 2024 13:37:27: #3 Going to call summits inside each peak ...
INFO @ Mon, 27 May 2024 13:37:27: #3 Pre-compute pvalue-qvalue table...
INFO @ Mon, 27 May 2024 13:37:27: #3 In the peak calling step, the following will be performed simultaneously:
INFO @ Mon, 27 May 2024 13:37:27: #3 Write bedGraph files for treatment pileup (after scaling if necessary)... wt_H3K4me3_treat_pileup.bdg
INFO @ Mon, 27 May 2024 13:37:27: #3 Write bedGraph files for control lambda (after scaling if necessary)... wt_H3K4me3_control_lambda.bdg
INFO @ Mon, 27 May 2024 13:37:27: #3 --SPMR is requested, so pileup will be normalized by sequencing depth in million reads.
INFO @ Mon, 27 May 2024 13:37:27: #3 Call peaks for each chromosome...
INFO @ Mon, 27 May 2024 13:37:27: #4 Write output xls file... wt_H3K4me3_peaks.xls
INFO @ Mon, 27 May 2024 13:37:27: #4 Write peak in narrowPeak format file... wt_H3K4me3_peaks.narrowPeak
INFO @ Mon, 27 May 2024 13:37:27: #4 Write summits bed file... wt_H3K4me3_summits.bed
INFO @ Mon, 27 May 2024 13:37:27: Done!
```
**Traceback:**
* ```console
```
**Job Parameters:**
* | Job parameter | Parameter value |
| ------------- | --------------- |
| \_\_input\_ext | ` "input" ` |
| \_\_workflow\_invocation\_uuid\_\_ | ` "acba36a01c2d11ef8884f526e62285ed" ` |
| advanced\_options | ` {"broad_options": {"__current_case__": 1, "broad_options_selector": "nobroad", "call_summits": true}, "buffer_size": "100000", "d_min": "20", "keep_dup_options": {"__current_case__": 1, "keep_dup_options_selector": "1"}, "llocal": null, "nolambda": false, "ratio": null, "slocal": null, "spmr": true, "to_large": false} ` |
| chromInfo | ` "/cvmfs/data.galaxyproject.org/managed/len/ucsc/mm10.len" ` |
| control | ` {"__current_case__": 1, "c_select": "No"} ` |
| cutoff\_options | ` {"__current_case__": 1, "cutoff_options_selector": "qvalue", "qvalue": "0.05"} ` |
| dbkey | ` "mm10" ` |
| effective\_genome\_size\_options | ` {"__current_case__": 4, "effective_genome_size_options_selector": "user_defined", "gsize": "1870000000"} ` |
| format | ` "BAMPE" ` |
| nomodel\_type | ` {"__current_case__": 0, "band_width": "300", "mfold_lower": "5", "mfold_upper": "50", "nomodel_type_selector": "create_model"} ` |
| outputs | ` ["peaks_tabular", "summits", "bdg", "html"] ` |
| treatment | ` {"__current_case__": 0, "input_treatment_file": {"values": [{"id": 10, "src": "dce"}]}, "t_multi_select": "No"} ` |
</details>
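The MACS2 command line stashes an exit status in `exit_code_for_galaxy`, prints the captured log with `cat macs2_stderr`, and then re-raises the status with `(exit $exit_code_for_galaxy)` so Galaxy still sees a failure even though the last command in the chain (the `cat`) succeeded. The idiom in isolation (log contents hypothetical):

```shell
# Capture an exit status, print the log, then re-raise the status, mirroring
# the exit_code_for_galaxy pattern in the MACS2 wrapper command line above.
echo "INFO: Done!" > macs2_stderr      # hypothetical stand-in for the real log
( grep -q "Done!" macs2_stderr )
exit_code_for_galaxy=$?
cat macs2_stderr
( exit $exit_code_for_galaxy ); echo "final status: $?"
```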
</details>
Attention: deployment skipped!
https://github.com/galaxyproject/iwc/actions/runs/9285551791
Hello! This is an automated update of the following workflow: workflows/epigenetics/chipseq-pe. I created this PR because I think one or more of the component tools are out of date, i.e. there is a newer version available on the ToolShed.
By comparing with the latest versions available on the ToolShed, it seems the following tools are outdated:
- toolshed.g2.bx.psu.edu/repos/lparsons/cutadapt/cutadapt/4.8+galaxy0 should be updated to toolshed.g2.bx.psu.edu/repos/lparsons/cutadapt/cutadapt/4.8+galaxy1
- toolshed.g2.bx.psu.edu/repos/devteam/bowtie2/bowtie2/2.5.3+galaxy0 should be updated to toolshed.g2.bx.psu.edu/repos/devteam/bowtie2/bowtie2/2.5.3+galaxy1
The workflow release number has been updated from 0.9 to 0.10.