LiuH2020 opened this issue 1 year ago
Hello, the previous error has been resolved by using other data (it was most likely caused by a damaged bam or bai file). Now I am running the lumpy_calls step and get this error:
INFO [job gunzip_manta] completed success
INFO [step gunzip_manta] completed success
INFO [workflow sv_calling] starting step lumpy_calls
INFO [step lumpy_calls] start
ERROR Exception on step 'lumpy_calls'
ERROR Cannot make scatter job: Missing required secondary file 'sample.discordant.bam.bai' from file object: {
"location": "file:///tmp/iyj1x0ux/sample.discordant.bam",
"basename": "sample.discordant.bam",
"nameroot": "sample.discordant",
"nameext": ".bam",
"class": "File",
"checksum": "sha1$1c2d7d73d12173a785089940ce7b4b5148ac9231",
"size": 11082194,
"http://commonwl.org/cwltool#generation": 0,
"secondaryFiles": []
}
WARNING [step lumpy_calls] completed permanentFail
INFO [workflow sv_calling] completed permanentFail
WARNING [step sv_calling] completed permanentFail
INFO [workflow ] completed permanentFail
{
"somatic_svs_bedpe": null
}
WARNING Final process status is permanentFail
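As I understand it, cwltool refuses to scatter because the File object for sample.discordant.bam has an empty secondaryFiles array, while the lumpy_calls step requires a .bai secondary file next to that bam. A sketch of what such a CWL input declaration typically looks like (hypothetical; the actual PACT source may differ):

```yaml
# Hypothetical CWL input declaration; the actual lumpy_calls source may differ.
inputs:
  discordant_bam:
    type: File
    secondaryFiles:
      - .bai   # cwltool fails the step when <bam>.bai is not staged alongside the bam
```

In practice this usually means the upstream step that produces sample.discordant.bam did not also produce (or forward) its index, so inspecting that step's outputs is a reasonable starting point.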
And my command and yml file look like this:
#code
cwltool /home/liuhui/ctDNA-pipeline1/PACT/pipelines/sv_pipeline.cwl sv_example.yml
#yml file
cat sv_example.yml
# For use with pipelines/sv_pipeline.cwl
# Reference should have .dict and .fai files in same directory
reference:
  class: File
  path: /home/liuhui/cfDNA-pipeline/hg19_bowtie2/hg19.fa
ref_genome: hg19
# snpEff database. These can be downloaded using java -jar snpEff.jar download <database>.
# Should correspond to reference genome
snpEff_data:
  class: Directory
  path: /home/liuhui/.conda/envs/PACT/share/snpeff-5.1-2/data/GRCh37.p13
# Paths to cfDNA samples
sample_bams:
  - {class: File, path: /home/liuhui/ctDNA-pipeline1/PACT-test/bwa-map/case1.sort.duplicate.bam}
# Paths to matched control samples (ex: plasma depleted whole blood)
# Should be in same order as sample_bams
matched_control_bams:
  - {class: File, path: /home/liuhui/ctDNA-pipeline1/PACT-test/bwa-map/ctrl1.sort.duplicate.bam}
# Paths to bams that make up the panel of normals.
panel_of_normal_bams:
  - {class: File, path: /home/liuhui/ctDNA-pipeline1/PACT-test/bwa-map/ctrl1.sort.duplicate.bam}
# Standard bed file of targeted regions during sequencing
target_regions:
  class: File
  path: /home/liuhui/ctDNA-pipeline1/PACT-test/run_PACT/targetRegions1.bed
# Neither breakend of SVs should fall in the blacklisted regions in this bed file
# We recommend the blacklist regions provided by 10xgenomics. Their hg19 bed file is at
# http://cf.10xgenomics.com/supp/genome/hg19/sv_blacklist.bed
neither_region:
  class: File
  path: /home/liuhui/ctDNA-pipeline1/PACT/example_data/hg19.longranger-blacklist.bed
# A maximum of one breakend for SVs may fall in the regions in this bed file
# We recommend Heng Li's low complexity regions found here
# https://github.com/lh3/varcmp/raw/master/scripts
notboth_region:
  class: File
  path: /home/liuhui/ctDNA-pipeline1/PACT/example_data/hg19.LCR.bed
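Before re-running, it may also help to confirm that every input bam listed in the yml has a valid index next to it. A minimal sketch (check_bai is a hypothetical helper; the bam names are illustrative):

```shell
# Minimal index sanity check (hypothetical helper; bam names are illustrative).
# Prints MISSING when <bam>.bai does not exist, STALE when the bam is newer
# than its index, and OK otherwise.
check_bai() {
  bam="$1"
  if [ ! -e "${bam}.bai" ]; then
    echo "MISSING index for ${bam}"
  elif [ "${bam}" -nt "${bam}.bai" ]; then
    echo "STALE index for ${bam}"
  else
    echo "OK ${bam}"
  fi
}

check_bai case1.sort.duplicate.bam
check_bai ctrl1.sort.duplicate.bam
```

A STALE index is worth regenerating (e.g. with samtools index), since some tools treat an index older than its bam as unusable.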
I'm not very familiar with CWL workflows; could you give me some advice on this error? Thank you very much.
Hello, thank you for the great pipeline! I am trying to run it to call SVs, and I got an error when trying to run sv_pipeline.cwl; my yml file is shown above.
The error seems to be caused by a failure to open the index of example.sample.bam, so the bai file may be damaged. I also can't reindex it: samtools index example.sample.bam fails with an error as well.
Could you give me some advice? Thank you very much.
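Since samtools index itself fails, the bam may be truncated or not a bam at all. A quick, hedged first check is to look at the file's magic bytes (samtools quickcheck, if available, is the more thorough option):

```shell
# BAM files are BGZF-compressed, so they start with the gzip magic bytes 1f 8b.
# bam_magic_ok is a hypothetical helper; a failing check suggests the file is
# truncated, mis-copied, or not a bam at all.
bam_magic_ok() {
  [ "$(head -c 2 "$1" 2>/dev/null | od -An -tx1 | tr -d ' \n')" = "1f8b" ]
}

if bam_magic_ok example.sample.bam; then
  echo "magic bytes look fine; the damage may be truncation later in the file"
else
  echo "not a BGZF/bam file"
fi
```

If the magic-byte check fails, the bam usually needs to be re-copied or regenerated from the mapping step before samtools index can succeed.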