dnanexus / dxCompiler

WDL and CWL compiler for the DNAnexus platform

variable declaration prevents workflow from running in parallel in dxCompiler 2.10.2 #378

Closed. Ben-Habermeyer closed this issue 1 year ago.

Ben-Habermeyer commented 2 years ago

Hi dxCompiler team,

One of the scientists I work with is reporting an issue with one of her workflows: an execution is not starting on DNAnexus even though all of its inputs are satisfied.

Here is the basic workflow definition. When run on DNAnexus, the tasks mutect2 and strelka and the scatter block all execute concurrently; however, muse does not. It waits for the scatter block to complete before executing, even though none of its inputs depend on anything produced by the block.

Moving muse up in the WDL does not resolve the issue.

version 1.0

import "bam_tools.wdl" as bam_tools
import "vcf_tools.wdl" as vcf_tools
import "mutect2.wdl" as mutect2
import "muse.wdl" as muse

workflow call_snps {
    input {
        File tumor_bam
        File normal_bam
        File genome_tarball
        File intervals
        File known_sites
        File known_sites_index
    }

    call mutect2.mutect2 {
        input:
            tumor_bam = tumor_bam,
            normal_bam = normal_bam,
            genome_tarball = genome_tarball,
            intervals = intervals
    }

    call strelka {
        input:
            tumor_bam = tumor_bam,
            normal_bam = normal_bam,
            genome_tarball = genome_tarball
    }

    Array[String] chr_num = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "X", "Y", "M"]
    Array[String] chrs = prefix("chr", chr_num)

    scatter (chr in chrs) {
        call bam_tools.split_bam as split_tumor_bam {
            input:
                chr = chr,
                bam = tumor_bam
        }

        call bam_tools.split_bam as split_normal_bam {
            input:
                chr = chr,
                bam = normal_bam
        }

        call lofreq {
            input:
                tumor_bam = split_tumor_bam.chr_bam,
                normal_bam = split_normal_bam.chr_bam,
                genome_tarball = genome_tarball,
                chr = chr,
                known_sites = known_sites,
                known_sites_index = known_sites_index
        }
    }

    call muse.muse {
        input:
            tumor_bam = tumor_bam,
            normal_bam = normal_bam,
            genome_tarball = genome_tarball,
            chrs = chrs,
            known_sites = known_sites
    }

    call vcf_tools.bcftools_concat_vcfs as merge_lofreq {
        input:
            vcfs = select_all(lofreq.somatic_vcf),
            caller_name = "lofreq"
    }

    call vcf_tools.intersect_calls as intersect_calls {
        input:
            lofreq_calls = merge_lofreq.merged_vcf,
            muse_calls = muse.muse_vcf,
            mutect2_calls = mutect2.filtered_vcf,
            strelka_calls = strelka.somatic_snv_vcf
    }

    output {
        File mutect2_vcf = mutect2.filtered_vcf
        File mutect2_filtering_stats = mutect2.filtering_stats
        File strelka_snv_vcf = strelka.somatic_snv_vcf
        File strelka_indel_vcf = strelka.somatic_indel_vcf
        File muse_vcf = muse.muse_vcf
        File lofreq_vcf = merge_lofreq.merged_vcf
        File intersected_calls = intersect_calls.intersected_calls
    }
}

To get them to run concurrently, the user reported:

  1. I moved the declarations Array[String] chr_num = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "X", "Y", "M"] and Array[String] chrs = prefix("chr", chr_num) into defaults.json as "chrs": ["chr1", "chr2", etc.] (see the sketch after this list).
  2. I moved LoFreq into a separate workflow and passed it the array chrs.
  3. I compiled with dxWDL 1.50. I know this one is deprecated, but I think some of the issues I've been having recently with DNAnexus are coming from dxCompiler.
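
For reference, a minimal sketch of the defaults.json from item 1, assuming dxCompiler's -defaults convention of fully qualified input names and a workflow named call_snps. For the default to take effect, chrs would also need to be declared as a workflow-level input rather than in the body:

{
  "call_snps.chrs": ["chr1", "chr2", "chr3", "chr4", "chr5", "chr6", "chr7", "chr8", "chr9", "chr10", "chr11", "chr12", "chr13", "chr14", "chr15", "chr16", "chr17", "chr18", "chr19", "chr20", "chr21", "chr22", "chrX", "chrY", "chrM"]
}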

There appears to be something about the way this sequence of declarations is compiled by dxCompiler that prevents the workflow from running in parallel (i.e., muse must run after the scatter). Let me know what you think and what the solution(s) may be.

emiloslavsky commented 2 years ago

I think this is the issue discussed in https://github.com/dnanexus/dxWDL/issues/162, where WDL expression evaluations in a workflow get merged into the fragment for the first applet that follows them.

Consider the following basic test:

version 1.0
task t1 {
  input {
    String s
  }
  command <<<
    echo t1 >s
  >>>
  output {
    File out = "s"
  }
}

task t2 {
  input {
    Array[String] s
  }
  command <<<
    echo t2 >s
  >>>
  output {
    File out = "s"
  }
}

task dummy {
  input {
  }
  command <<<
  >>>
  output {
  }
}

workflow tinytest3 {
    Array[String] chr_num = ["1"]
    Array[String] chrs = prefix("chr", chr_num)
    call dummy

    call t2 {input: s = chrs}
    scatter (chr in chrs) {
        call t1 { input: s = chr }
    }
    output {
        Array[File] out1 = t1.out
        File out2 = t2.out
    }
}

With dxCompiler 2.10.0

  1. The following makes a DNAnexus workflow where t2 depends on the scatter:

    workflow tinytest {
        Array[String] chr_num = ["1"]
        Array[String] chrs = prefix("chr", chr_num)
        scatter (chr in chrs) {
            call t1 { input: s = chr }
        }
        call t2 {input: s = chrs}
    }
  2. The following makes a DNAnexus workflow where the scatter depends on t2:

    workflow tinytest2 {
        Array[String] chr_num = ["1"]
        Array[String] chrs = prefix("chr", chr_num)

        call t2 {input: s = chrs}
        scatter (chr in chrs) {
            call t1 { input: s = chr }
        }
    }
  3. The following makes a DNAnexus workflow where the scatter and t2 both depend on dummy, but not on each other:

    workflow tinytest3 {
        Array[String] chr_num = ["1"]
        Array[String] chrs = prefix("chr", chr_num)
        call dummy

        call t2 {input: s = chrs}
        scatter (chr in chrs) {
            call t1 { input: s = chr }
        }
    }

The dependency can be seen by examining the inputs to each stage. E.g. for tinytest3:

$ dx describe workflow-GGGQzJ00Bvbj46PP4gjFyXP9 --json | jq '.stages|.[]|{id: .id,name: .name, input: .input}'
{
  "id": "stage-common",
  "name": "common",
  "input": {}
}
{
  "id": "stage-1",
  "name": "frag dummy",
  "input": {}
}
{
  "id": "stage-2",
  "name": "t2",
  "input": {
    "s": {
      "$dnanexus_link": {
        "outputField": "chrs",
        "stage": "stage-1"
      }
    }
  }
}
{
  "id": "stage-4",
  "name": "scatter (chr in chrs)",
  "input": {
    "chrs": {
      "$dnanexus_link": {
        "outputField": "chrs",
        "stage": "stage-1"
      }
    }
  }
}
{
  "id": "stage-outputs",
  "name": "outputs",
  "input": {
    "t1___out": {
      "$dnanexus_link": {
        "outputField": "t1___out",
        "stage": "stage-4"
      }
    },
    "t2___out": {
      "$dnanexus_link": {
        "outputField": "out",
        "stage": "stage-2"
      }
    }
  }
}
emiloslavsky commented 2 years ago

So this is a known behavior that reduces the number of jobs at the expense of a potential decrease in execution concurrency, and it has a known workaround (see tinytest3). We can consider making such situations easier to identify, or changing dxCompiler's behavior to place expression evaluation into a separate job when multiple tasks depend on its results.
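
Applied to the original workflow, the workaround is just the tinytest3 pattern. A minimal sketch, assuming a no-op task like the dummy task above is defined alongside the workflow, so the chr_num/chrs evaluations are absorbed into the dummy fragment and both the scatter and muse link to its stage rather than to each other:

workflow call_snps {
    # ... inputs and the mutect2/strelka calls as before ...

    Array[String] chr_num = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "X", "Y", "M"]
    Array[String] chrs = prefix("chr", chr_num)
    call dummy  # no-op call; the expression evaluations above get merged into this fragment

    # Both of the following now depend only on the dummy stage, not on each other:
    scatter (chr in chrs) {
        # ... split_bam and lofreq calls as before ...
    }

    call muse.muse {
        input:
            tumor_bam = tumor_bam,
            normal_bam = normal_bam,
            genome_tarball = genome_tarball,
            chrs = chrs,
            known_sites = known_sites
    }

    # ... merge/intersect calls and outputs as before ...
}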

Gvaihir commented 2 years ago

@Ben-Habermeyer is there a reason to declare chr_num and chrs in the body of the workflow instead of declaring them as workflow-level inputs (with defaults)? That way muse should depend on the common stage and not on the scatter.
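
A minimal sketch of that suggestion, assuming the rest of the workflow stays unchanged. The default expression is evaluated in the common stage, so both consumers link to stage-common:

workflow call_snps {
    input {
        # ... other File inputs as before ...

        # Workflow-level input with a default, instead of a body declaration:
        Array[String] chrs = prefix("chr", ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "X", "Y", "M"])
    }

    # ... calls as before; both the scatter and muse.muse consume chrs ...
}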

Ben-Habermeyer commented 2 years ago

Hi @Gvaihir, thanks for the reply. Yes, that is the approach we went with. I think the user was confused about why placing the variables in the body of the workflow resulted in this behavior.

Gvaihir commented 2 years ago

I'll update doc/Internals.md with regards to that, plus recommendations. Short answer: please avoid declaring variables in the workflow body when they don't depend on the outputs of any task or workflow; move them to the top-level inputs instead. This will eliminate the unwanted dependencies. I don't envision any immediate changes to dxCompiler's current logic in this regard, so I will probably close the issue shortly. Otherwise, I'll update here.