Arcadia-Science / metagenomics

A Nextflow workflow for QC, evaluation, and profiling of metagenomic samples using short- and long-read technologies

Introduction

Arcadia-Science/metagenomics is a pipeline for profiling metagenomes obtained through either Illumina or Nanopore technologies.

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

Pipeline Summary

This pipeline performs common QC, processing, and profiling steps for metagenomes obtained through either Illumina or Nanopore technologies. The pipeline consists of two separate workflows for processing Illumina and Nanopore data and producing assemblies; Illumina and Nanopore samples are therefore processed separately, as this pipeline does not handle hybrid assembly or polishing with short reads. Downstream steps for summarizing the composition of metagenomes are mostly identical for the two technologies. You can find more information about how the pipeline operates in the docs.

Quick Start

  1. Install Nextflow (>=22.10.1)

  2. Install Docker, Singularity (you can follow this tutorial), or Conda. You can use Conda to install Nextflow itself, but please only use it to manage software within pipelines as a last resort. We recommend using Docker if possible, as this has been tested most frequently. See the nf-core docs for more information.

  3. Download the pipeline and test it on the minimal datasets for Illumina and Nanopore:

    nextflow run Arcadia-Science/metagenomics -profile test_illumina,YOURPROFILE --outdir <OUTDIR>
    nextflow run Arcadia-Science/metagenomics -profile test_nanopore,YOURPROFILE --outdir <OUTDIR>

Note that some form of configuration will be needed so that Nextflow knows how to fetch the required software. This is usually done in the form of a config profile (YOURPROFILE in the example command above). You can chain multiple config profiles in a comma-separated string.

  • The pipeline comes with several config profiles, but we recommend using docker when possible, such as -profile test_illumina,docker.
  • Please check nf-core/configs to see if a custom config file to run nf-core pipelines already exists for your institute. If so, you can simply use -profile <institute> in your command. This will enable either docker or singularity and set the appropriate execution settings for your local compute environment.
  • If you are using singularity, please use the nf-core download command to download images first, before running the pipeline. Setting the NXF_SINGULARITY_CACHEDIR or singularity.cacheDir Nextflow options enables you to store and re-use the images from a central location for future pipeline runs.
  • If you are using conda, it is highly recommended to use the NXF_CONDA_CACHEDIR or conda.cacheDir settings to store the environments in a central location for future pipeline runs (a minimal example of setting these cache locations follows this list).
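
For example, these cache locations can be set once in your shell before launching Nextflow. This is a minimal sketch with placeholder paths; use any directory that is shared across your compute environment:

    # Placeholder paths: point Nextflow at shared caches so container images and
    # conda environments are re-used between runs instead of being fetched each time.
    export NXF_SINGULARITY_CACHEDIR=/shared/cache/singularity   # for -profile singularity
    export NXF_CONDA_CACHEDIR=/shared/cache/conda               # for -profile conda
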
  4. Start running your own analysis!

Prior to running the workflow on your samples, you will need to download and prepare databases for sourmash and DIAMOND. A number of prepared sourmash databases are available, and the DIAMOND documentation describes how to create a database compatible with the diamond blastp step used in the workflow. See the usage documentation for more information on how to prepare these databases.
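
As a rough sketch, preparing these databases could look like the commands below; the sourmash database URL and the protein FASTA are placeholders for whichever references you choose:

    # Download one of the prepared sourmash databases (the URL is a placeholder;
    # choose a database from the sourmash prepared-databases page).
    curl -L -o sourmash_db.zip <prepared-sourmash-database-url>

    # Build a DIAMOND database from a reference protein FASTA for use with diamond blastp.
    # proteins.faa is a placeholder for your chosen protein sequences.
    diamond makedb --in proteins.faa --db prepared_diamond_db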

    nextflow run Arcadia-Science/metagenomics --input samplesheet.csv --outdir <OUTDIR> --platform <illumina|nanopore> --sourmash_dbs sourmash_dbs_paths.csv --diamond_db prepared_diamond_db.dmnd -profile <docker/singularity/conda/institute>

You can find more information about how to format your input samplesheet CSV and how to provide the paths to pre-downloaded sourmash and DIAMOND database files in docs/usage.md.
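
For orientation only, a paired-end Illumina samplesheet might look something like the sketch below. The column names follow common nf-core conventions and are assumptions here, not the pipeline's documented schema, so treat docs/usage.md as the authoritative reference for both the samplesheet and the sourmash database CSV:

    # Illustrative only -- these column names are assumptions, not the documented
    # format; see docs/usage.md for the real samplesheet and database CSV layouts.
    cat > samplesheet.csv <<'EOF'
    sample,fastq_1,fastq_2
    sample1,sample1_R1.fastq.gz,sample1_R2.fastq.gz
    EOF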

Citations

The nf-core template was used as a guideline for putting this workflow together. You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.