NCI-CGR / gatk-sv

A structural variation pipeline for short-read sequencing - modified to run on HPC
BSD 3-Clause "New" or "Revised" License

GATK-SV

A structural variation discovery pipeline for Illumina short-read whole-genome sequencing (WGS) data.

Table of Contents

Requirements

Deployment and execution:

Alternative backends

Because GATK-SV has been tested only on the Google Cloud Platform (GCP), we are unable to provide specific guidance or support for other execution platforms, including HPC clusters and AWS. Contributions from the community to improve portability between backends will be considered on a case-by-case basis. We ask contributors to please adhere to the following guidelines when submitting issues and pull requests:

  1. Code changes must be functionally equivalent on GCP backends, i.e. not result in changed output
  2. Increases to cost and runtime on GCP backends should be minimal
  3. Avoid adding new inputs and tasks to workflows. Simpler changes are more likely to be approved, e.g. small in-line changes to scripts or WDL task command sections
  4. Avoid introducing new code paths, e.g. conditional statements
  5. Additional backend-specific scripts, workflows, tests, and Dockerfiles will not be approved
  6. Changes to Dockerfiles may require extensive testing before approval

We still encourage members of the community to adapt GATK-SV for non-GCP backends and share code on forked repositories. Here are some considerations:

Data:

PED file format

The PED file format is described here. Note that GATK-SV imposes additional requirements:

Sample Exclusion

We recommend filtering out samples with a high percentage of improperly paired reads (>10% or an outlier for your data) as technical outliers prior to running GatherSampleEvidence. A high percentage of improperly paired reads may indicate issues with library prep, degradation, or contamination. Artifactual improperly paired reads could cause incorrect SV calls, and these samples have been observed to have longer runtimes and higher compute costs for GatherSampleEvidence.
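
One quick way to estimate this fraction is from samtools flagstat output. The sketch below (sample.bam and the 10% threshold are placeholders) treats all paired reads that are not flagged as properly paired as improper, which is a rough proxy rather than a definitive metric:

> samtools flagstat sample.bam \
    | awk '/ paired in sequencing$/ {paired=$1}
           / properly paired /      {proper=$1}
           END {frac = (paired - proper) / paired;
                printf "improperly paired fraction: %.4f\n", frac;
                if (frac > 0.10) print "WARNING: above the 10% guideline"}'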

Sample ID requirements:

Sample IDs must:

Sample IDs should not:

The same requirements apply to family IDs in the PED file, as well as batch IDs and the cohort ID provided as workflow inputs.

Sample IDs are provided to GatherSampleEvidence directly and need not match sample names from the BAM/CRAM headers. GetSampleID.wdl can be used to fetch BAM sample IDs; it also generates a set of alternate IDs that are considered safe for this pipeline. Alternatively, a provided script transforms a list of sample IDs to fit these requirements. Currently, sample IDs can be replaced again in GatherBatchEvidence.
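
For a quick spot check, the SM tags in the read group headers can also be pulled directly with samtools (sample.bam is a placeholder; this lists the raw header sample names and does not apply the ID transformations described above):

> samtools view -H sample.bam \
    | awk -F'\t' '/^@RG/ {for (i = 1; i <= NF; i++) if ($i ~ /^SM:/) print substr($i, 4)}' \
    | sort -u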

The following inputs will need to be updated with the transformed sample IDs:

Citation

Please cite the following publication: Collins, Brand, et al. 2020. "A structural variation reference for medical and population genetics." Nature 581, 444-451.

Additional references: Werling et al. 2018. "An analytical framework for whole-genome sequence association studies and its implications for autism spectrum disorder." Nature Genetics 50, 727-736.

Quickstart

WDLs

There are two scripts for running the full pipeline:

Building inputs

Example workflow inputs can be found in /inputs. Build using scripts/inputs/build_default_inputs.sh, which generates input jsons in /inputs/build. Except for the MELT docker image, all required resources are available in public Google buckets.

Some workflows require a Google Cloud Project ID to be defined in a cloud environment parameter group. Workspace builds require a Terra billing project ID as well. An example is provided at /inputs/values/google_cloud.json but should not be used, as modifying this file will cause tracked changes in the repository. Instead, create a copy in the same directory with the format google_cloud.my_project.json and modify as necessary.
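
For example, working from the repository root:

> # Copy the example cloud configuration, then edit google_project_id
> # (and terra_billing_project_id for workspace builds) in the new file
> cp inputs/values/google_cloud.json inputs/values/google_cloud.my_project.json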

Note that these inputs are required only when certain data are located in requester pays buckets. If this does not apply, users may use placeholder values for the cloud configuration and simply delete the inputs manually.

MELT

Important: The example input files contain MELT inputs that are NOT public (see Requirements). These include:

The input values are provided only as an example and are not publicly accessible. In order to include MELT, these values must be provided by the user. MELT can be disabled by deleting these inputs and setting GATKSVPipelineBatch.use_melt to false.
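
As a sketch, a built input json could be adjusted with jq; the case-insensitive match on "melt" is illustrative, so verify against your input file which keys are actually removed:

> jq 'with_entries(select(.key | test("melt"; "i") | not)) | . + {"GATKSVPipelineBatch.use_melt": false}' \
    GATKSVPipelineBatch.my_run.json > GATKSVPipelineBatch.no_melt.json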

Execution

We recommend running the pipeline on a dedicated Cromwell server with a cromshell client. A batch run can be started with the following commands:

> mkdir gatksv_run && cd gatksv_run
> mkdir wdl && cd wdl
> cp $GATK_SV_ROOT/wdl/*.wdl .
> zip dep.zip *.wdl
> cd ..
> echo '{ "google_project_id": "my-google-project-id", "terra_billing_project_id": "my-terra-billing-project" }' > inputs/values/google_cloud.my_project.json
> bash scripts/inputs/build_default_inputs.sh -d $GATK_SV_ROOT -c google_cloud.my_project
> cp $GATK_SV_ROOT/inputs/build/ref_panel_1kg/test/GATKSVPipelineBatch/GATKSVPipelineBatch.json GATKSVPipelineBatch.my_run.json
> cromshell submit wdl/GATKSVPipelineBatch.wdl GATKSVPipelineBatch.my_run.json cromwell_config.json wdl/dep.zip

where cromwell_config.json is a Cromwell workflow options file. Note that users will need to re-populate batch- and sample-specific parameters (e.g. BAMs and sample IDs).
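
A minimal cromwell_config.json might contain only output-routing options, using the same standard Cromwell options shown in the reference panel section below (the bucket path is a placeholder):

  {
    "final_workflow_outputs_dir" : "gs://my-outputs-bucket",
    "use_relative_output_paths" : false
  }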

Pipeline Overview

The pipeline consists of a series of modules that perform the following:

Repository structure:

Cohort mode

A minimum cohort size of 100 is required, and a roughly equal number of males and females is recommended. For modest cohorts (~100-500 samples), the pipeline can be run as a single batch using GATKSVPipelineBatch.wdl.

For larger cohorts, samples should be split up into batches of about 100-500 samples. Refer to the Batching section for further guidance on creating batches.

The pipeline should be executed as follows:

Note: GatherBatchEvidence requires a trained gCNV model.

Batching

For larger cohorts, samples should be split up into batches of about 100-500 samples with similar characteristics. We recommend batching based on overall coverage and dosage score (WGD), which can be generated in EvidenceQC. An example batching process is outlined below, followed by a small shell sketch of the coverage/WGD partitioning:

  1. Divide the cohort into PCR+ and PCR- samples
  2. Partition the samples by median coverage from EvidenceQC, grouping samples with similar median coverage together. The end goal is to divide the cohort into roughly equal-sized batches of about 100-500 samples; if your partitions based on coverage are larger or uneven, you can partition the cohort further in the next step to obtain the final batches.
  3. Optionally, divide the samples further by dosage score (WGD) from EvidenceQC, grouping samples with similar WGD score together, to obtain roughly equal-sized batches of about 100-500 samples
  4. Maintain a roughly equal sex balance within each batch, based on sex assignments from EvidenceQC
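
A minimal sketch of steps 2-3, assuming a hypothetical tab-delimited table sample_metadata.tsv with columns sample_id, pcr_status, median_coverage, and wgd (taken from EvidenceQC outputs) and an illustrative batch size of 200; the sex balance from step 4 still needs to be checked separately:

> # PCR- samples sorted by median coverage then WGD, split into chunks of ~200
> awk -F'\t' '$2 == "PCR-"' sample_metadata.tsv \
    | sort -t$'\t' -k3,3g -k4,4g \
    | split -l 200 - batch_pcrminus_
> # Repeat with $2 == "PCR+" for the PCR+ partition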

Single-sample mode

GATKSVPipelineSingleSample.wdl runs the pipeline on a single sample using a fixed reference panel. An example run with a reference panel containing 156 samples from the NYGC 1000G Terra workspace can be found in inputs/build/NA12878/test (after building inputs).

gCNV Training

Both the cohort and single-sample modes use the GATK-gCNV depth calling pipeline, which requires a trained model as input. The samples used for training should be technically homogeneous and similar to the samples to be processed (i.e. same sample type, library prep protocol, sequencer, sequencing center, etc.). The samples to be processed may comprise all or a subset of the training set. For small, relatively homogeneous cohorts, a single gCNV model is usually sufficient. If a cohort contains multiple data sources, we recommend training a separate model for each batch or group of batches with similar dosage score (WGD). The model may be trained on all or a subset of the samples to which it will be applied; a reasonable default is 100 randomly selected samples from the batch. The random selection can be performed as part of the workflow by specifying the number of samples via the n_samples_subsample input parameter in /wdl/TrainGCNV.wdl.
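
For example, the subsample size could be set in a built TrainGCNV input json with jq; the fully qualified key assumes the workflow is named TrainGCNV, and the input path is illustrative, so check your own build directory:

> jq '. + {"TrainGCNV.n_samples_subsample": 100}' \
    inputs/build/ref_panel_1kg/test/TrainGCNV/TrainGCNV.json > TrainGCNV.my_run.json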

Generating a reference panel

New reference panels can be generated easily from a single run of the GATKSVPipelineBatch workflow. If using a Cromwell server, we recommend copying the outputs to a permanent location by adding the following option to the workflow configuration file:

  "final_workflow_outputs_dir" : "gs://my-outputs-bucket",
  "use_relative_output_paths": false,

Here is an example of how to generate workflow input jsons from GATKSVPipelineBatch workflow metadata:

> cromshell -t60 metadata 38c65ca4-2a07-4805-86b6-214696075fef > metadata.json
> python scripts/inputs/create_test_batch.py \
    --execution-bucket gs://my-exec-bucket \
    --final-workflow-outputs-dir gs://my-outputs-bucket \
    metadata.json \
    > inputs/values/my_ref_panel.json
> # Define your google project id (for Cromwell inputs) and Terra billing project (for workspace inputs)
> echo '{ "google_project_id": "my-google-project-id", "terra_billing_project_id": "my-terra-billing-project" }' > inputs/values/google_cloud.my_project.json
> # Build test files for batched workflows (google cloud project id required)
> python scripts/inputs/build_inputs.py \
    inputs/values \
    inputs/templates/test \
    inputs/build/my_ref_panel/test \
    -a '{ "test_batch" : "ref_panel_1kg", "cloud_env": "google_cloud.my_project" }'
> # Build test files for the single-sample workflow
> python scripts/inputs/build_inputs.py \
    inputs/values \
    inputs/templates/test/GATKSVPipelineSingleSample \
    inputs/build/NA19240/test_my_ref_panel \
    -a '{ "single_sample" : "test_single_sample_NA19240", "ref_panel" : "my_ref_panel" }'
> # Build files for a Terra workspace
> python scripts/inputs/build_inputs.py \
    inputs/values \
    inputs/templates/terra_workspaces/single_sample \
    inputs/build/NA12878/terra_my_ref_panel \
    -a '{ "single_sample" : "test_single_sample_NA12878", "ref_panel" : "my_ref_panel" }'

Note that the inputs to GATKSVPipelineBatch may be used as resources for the reference panel and therefore should also be in a permanent location.

Module Descriptions

The following sections briefly describe each module and highlight interdependent input/output files. Note that input/output mappings can also be gleaned from GATKSVPipelineBatch.wdl, and example input templates for each module can be found in /inputs/templates/test.

GatherSampleEvidence

Formerly Module00a

Runs raw evidence collection on each sample with the following SV callers: Manta, Wham, and/or MELT. For guidance on pre-filtering prior to GatherSampleEvidence, refer to the Sample Exclusion section.

Note: a list of sample IDs must be provided. Refer to the sample ID requirements for specifications of allowable sample IDs. IDs that do not meet these requirements may cause errors.

Inputs:

Outputs:

EvidenceQC

Formerly Module00b

Runs ploidy estimation, dosage scoring, and optionally VCF QC. The results from this module can be used for QC and batching.

For large cohorts, this workflow can be run on arbitrary cohort partitions of up to about 500 samples. Afterwards, we recommend using the results to divide samples into smaller batches (~100-500 samples) with ~1:1 male:female ratio. Refer to the Batching section for further guidance on creating batches.

We also recommend using sex assignments generated from the ploidy estimates and incorporating them into the PED file, with sex = 0 for sex aneuploidies.
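
One way to fold these assignments back into a tab-delimited PED file is sketched below, assuming a hypothetical two-column table sex_assignments.tsv (sample ID, sex code, with 0 for aneuploidies) derived from the EvidenceQC ploidy estimates; the PED sex code is column 5 and the sample ID is column 2:

> awk 'BEGIN {FS = OFS = "\t"} NR == FNR {sex[$1] = $2; next} $2 in sex {$5 = sex[$2]} {print}' \
    sex_assignments.tsv cohort.ped > cohort.updated.ped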

Prerequisites:

Inputs:

Outputs:

Preliminary Sample QC

The purpose of sample filtering at this stage after EvidenceQC is to prevent very poor quality samples from interfering with the results for the rest of the callset. In general, samples that are borderline are okay to leave in, but you should choose filtering thresholds to suit the needs of your cohort and study. There will be future opportunities (as part of FilterBatch) for filtering before the joint genotyping stage if necessary. Here are a few of the basic QC checks that we recommend:

TrainGCNV

Trains a gCNV model for use in GatherBatchEvidence. The WDL can be found at /wdl/TrainGCNV.wdl. See the gCNV training overview for more information.

Prerequisites:

Inputs:

Outputs:

GatherBatchEvidence

Formerly Module00c

Runs CNV callers (cn.MOPS, GATK-gCNV) and combines single-sample raw evidence into a batch. See above for more information on batching.

Prerequisites:

Inputs:

Outputs:

ClusterBatch

Formerly Module01

Clusters SV calls across a batch.

Prerequisites:

Inputs:

Outputs:

GenerateBatchMetrics

Formerly Module02

Generates variant metrics for filtering.

Prerequisites:

Inputs:

Outputs:

FilterBatch

Formerly Module03

Filters poor-quality variants and outlier samples. This workflow can be run all at once with the WDL at wdl/FilterBatch.wdl, or it can be run in three steps to enable tuning of outlier filtration cutoffs (a cromshell sketch follows the list below). The three subworkflows are:

  1. FilterBatchSites: Per-batch variant filtration
  2. PlotSVCountsPerSample: Visualize SV counts per sample per type to help choose an IQR cutoff for outlier filtering, and preview outlier samples for a given cutoff
  3. FilterBatchSamples: Per-batch outlier sample filtration; provide an appropriate outlier_cutoff_nIQR based on the SV count plots and outlier previews from step 2. Note that not removing high outliers can result in increased compute cost and a higher false positive rate in later steps.
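
A sketch of running the three stages separately with cromshell, reusing the dependency zip and options file from the Quickstart; the WDL and input json names are assumed to mirror the subworkflow names, so adjust them to your checkout and build directory:

> cromshell submit wdl/FilterBatchSites.wdl FilterBatchSites.my_batch.json cromwell_config.json wdl/dep.zip
> cromshell submit wdl/PlotSVCountsPerSample.wdl PlotSVCountsPerSample.my_batch.json cromwell_config.json wdl/dep.zip
> # Inspect the SV count plots, choose outlier_cutoff_nIQR, add it to the FilterBatchSamples inputs, then:
> cromshell submit wdl/FilterBatchSamples.wdl FilterBatchSamples.my_batch.json cromwell_config.json wdl/dep.zip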

Prerequisites:

Inputs:

Outputs:

MergeBatchSites

Formerly MergeCohortVcfs

Combines filtered variants across batches. The WDL can be found at: /wdl/MergeBatchSites.wdl.

Prerequisites:

Inputs:

Outputs:

GenotypeBatch

Formerly Module04

Genotypes a batch of samples across unfiltered variants combined across all batches.

Prerequisites:

Inputs:

Outputs:

RegenotypeCNVs

Formerly Module04b

Re-genotypes probable mosaic variants across multiple batches.

Prerequisites:

Inputs:

Outputs:

MakeCohortVcf

Formerly Module0506

Combines variants across multiple batches, resolves complex variants, re-genotypes, and performs final VCF clean-up.

Prerequisites:

Inputs:

Outputs:

Module 07 (in development)

Applies downstream filtering steps to the cleaned VCF to further control the false discovery rate. All steps are optional; users should decide which to apply based on the specific goals of their project.

Filtering methods include:

AnnotateVcf (in development)

Formerly Module08Annotation

Adds annotations, such as the inferred function and allele frequencies of variants, to the final VCF.

Annotation methods include:

Module 09 (in development)

Visualizes SVs with IGV screenshots and read depth plots.

Visualization methods include:

CI/CD

This repository is maintained following the norms of continuous integration (CI) and continuous delivery (CD). GATK-SV CI/CD is developed as a set of GitHub Actions workflows available under the .github/workflows directory. Please refer to the workflows' README for their current coverage and setup.

Troubleshooting

VM runs out of memory or disk

Calculated read length causes error in MELT workflow

Example error message from GatherSampleEvidence.MELT.GetWgsMetrics:

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: The requested index 701766 is out of counter bounds. Possible cause of exception can be wrong READ_LENGTH parameter (much smaller than actual read length)

This error message was observed for a sample with an average read length of 117, but for which half the reads were of length 90 and half were of length 151. As a workaround, override the calculated read length by providing a read_length input of 151 (or the expected read length for the sample in question) to GatherSampleEvidence.
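
To confirm a mixed read-length distribution before overriding, the lengths of a subset of reads can be tallied directly (file names and the 100,000-read sample size are illustrative; -T supplies the reference required to decode a CRAM):

> samtools view -T ref.fasta sample.cram | head -n 100000 | awk '{print length($10)}' | sort -n | uniq -c
> # Then set the override in the GatherSampleEvidence inputs, e.g. (fully qualified key assumed):
> jq '. + {"GatherSampleEvidence.read_length": 151}' GatherSampleEvidence.my_sample.json > GatherSampleEvidence.my_sample.fixed.json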