KOunt

Snakemake pipeline calculating KEGG orthologue abundance in metagenomic sequence data.

GNU General Public License v3.0

Documentation

KOunt is a Snakemake pipeline that calculates the abundance of KEGG orthologues (KOs) in metagenomic sequence data. KOunt takes raw paired-end reads, quality trims and assembles them, predicts proteins and annotates the proteins with KofamScan. The reads are mapped back to the assembly and protein coverage is calculated. Users have the option of calculating the coverage evenness of the proteins and filtering the KofamScan hits to remove unevenly covered proteins.

The proteins annotated by KofamScan are clustered at 100%, 90% and 50% identity within each KO to quantify their diversity. Because evenness filtering reduces the number of these proteins, we don't recommend using the evenness option if you are interested in the clustering results.

All predicted proteins that don't have a KO hit, or that are excluded by evenness filtering, are labelled 'NoHit'. The NoHit proteins are blasted against a custom UniProt database annotated with KOs, and their nucleotide sequences against a custom RNA database. Reads mapped to NoHit proteins that remain unannotated, along with unmapped reads, are blasted against the KOunt databases, and RNA is quantified in the remaining reads.

If you use KOunt, please cite https://academic.oup.com/bioinformatics/article/39/8/btad483/7236497

Workflow

Installation

Dependencies

Install Conda or Miniconda
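
If conda is not already on your system, one common route is the Miniconda installer (shown here only as an example; check the Miniconda website for the current installer link):

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh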

Source

Download the latest version of the Snakefile, scripts and conda env files.

git clone https://github.com/WatsonLab/KOunt
cd KOunt/

Check that the scripts are executable; if not, run: chmod +x scripts/*sh

Prepare the reference databases

Download the KOunt UniProt and RNA databases.

wget https://figshare.com/ndownloader/files/37711530
mv 37711530 KOunt_databases.tar
tar -xzvf KOunt_databases.tar
gunzip KOunt_databases_v1/*
rm KOunt_databases.tar
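
After extraction and decompression you should be left with a KOunt_databases_v1/ directory of database files; a quick sanity check:

ls KOunt_databases_v1/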

If you wish to update these databases, further information on how they were created is available here.

Install Snakemake

conda create -n snakemake_mamba -c conda-forge -c bioconda mamba=1.0.0
conda activate snakemake_mamba
mamba install -c bioconda snakemake=7.22.0
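
To confirm the installation:

snakemake --version   # should print 7.22.0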

Download test data

Download the test fastqs.

wget https://figshare.com/ndownloader/files/39545968
mv 39545968 test_fastqs.tar
tar -xvf test_fastqs.tar
rm test_fastqs.tar

Install the conda environments

conda activate snakemake_mamba
snakemake --conda-create-envs-only --cores 1
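
This pre-builds all of the pipeline's conda environments before any jobs run. Snakemake caches them under .snakemake/conda/ in the working directory by default, so a populated directory there indicates the environments were created:

ls .snakemake/conda/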

Test installation

Leave the raw reads location in the config at its default and perform a dry run with the reads subsampled from ERR2027889, then run the pipeline. With 8 cores it should take approximately 20 minutes.

snakemake -k --ri -n
snakemake -k --ri --cores 8
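
If the run finishes cleanly, the final abundance table listed under Output below should exist:

ls Results/All_KOunts_nohit_unmapped_default.csv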

Running KOunt

Amend the options config file, config.yaml, with your fastq file locations and extensions. KOunt expects the raw reads for each sample to be in a directory named after the sample, e.g. ERR2027889/ERR2027889_R1.fastq.gz, and it runs the pipeline on every sample in the directory you specify in the config file (see the example layout after the command below). To use the default rule all in the Snakefile, specify the number of cores you have available and run the entire pipeline with:

snakemake -k --ri --cores 8
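
For example, with two samples the reads directory might look like this (reads/ and sampleB are placeholder names; the _R1/_R2 extensions are whatever you set in the config):

reads/
    ERR2027889/
        ERR2027889_R1.fastq.gz
        ERR2027889_R2.fastq.gz
    sampleB/
        sampleB_R1.fastq.gz
        sampleB_R2.fastq.gz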

If you wish to run only part of the pipeline, you can specify an alternative rule all.

To perform all steps but the protein clustering use:

snakemake -k --ri all_without_clustering --cores 8

To perform all steps but protein clustering and read/protein annotation with the KOunt reference databases:

snakemake -k --ri all_without_reference --cores 8

To perform all steps but protein clustering and RNA abundance quantification:

snakemake -k --ri all_without_RNA --cores 8
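
To see every rule defined in the Snakefile, including these alternative rule alls, you can ask Snakemake directly:

snakemake --list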

Estimated run times and memory usage

The average run time and maximum memory used by each of the rules on the 10 samples from the KOunt manuscript are available here.

Options

The following options can be amended in the config.yaml file:

The read ids in the trimmed reads are shortened to the text before the first space, and /1 or /2 is added to the end if not already present. By default the read ids are compared to ensure all ids are unique, but this check can be disabled if you're sure they will be
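
As an illustration of that renaming (this is not the pipeline's own script, and the file names are placeholders), the equivalent transformation for R1 reads is:

# Shorten each fastq header to the text before the first space and append /1 if missing
zcat sample_R1.fastq.gz \
  | awk 'NR % 4 == 1 { split($0, parts, " "); id = parts[1]; if (id !~ /\/1$/) id = id "/1"; print id; next } { print }' \
  | gzip > sample_R1.renamed.fastq.gz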

The abundance of proteins that KofamScan annotates with multiple KOs can either be split between those KOs or summed, together with all other proteins with multiple hits, into 'Multiples'. For example, if a protein with an abundance of 10 is annotated with both K00001 and K00002, splitting assigns 5 to each KO, whereas the alternative adds all 10 to 'Multiples'

Raw read trimming (rule trim)

Assembly (rule megahit)

Mapping (rule bwa)

Coverage (rule coverage)

KEGG database download (rule kegg_db)

KofamScan (rule kofamscan)

KofamScan results (rule kofamscan_results)

CD-HIT (rule cdhit)

MMseqs2 KOs (rule mmseq_keggs)

MMseqs2 NoHit (rule mmseq_nohit)

Diamond (rule diamond_search)

Barrnap (rule barrnap)

Annotate NoHit reads (rule nohit_annotate_reads)

Kallisto (rule kallisto)

Unmapped read annotation (rule unmapped_reads)

Output

Default

Results/KOunts_Kofamscan.csv: KO abundance in each sample, calculated by KofamScan, without read mapping
Results/All_KOunts_nohit_unmapped_default.csv: Final KO abundance in each sample
Results/Number_of_clusters.csv: Number of clusters of proteins at 90% and 50% sequence identity in each KO, the number of clusters that contain multiple proteins and the number of singleton clusters

Without clustering

Results/KOunts_Kofamscan.csv: KO abundance in each sample, calculated by KofamScan, without read mapping
Results/All_KOunts_nohit_unmapped_no_clustering.csv: Final KO abundance in each sample

Without reference databases

Results/All_KOunts_without_reference.csv: Final KO abundance in each sample, calculated by KofamScan, without read mapping

Without RNA

Results/KOunts_Kofamscan_without_clustering.csv: KO abundance in each sample, calculated by KofamScan, without read mapping
Results/All_KOunts_without_RNA.csv: Final KO abundance in each sample