nf-core/pathogensurveillance is a population genomic pipeline for pathogen diagnosis, variant detection, and biosurveillance. The pipeline accepts paths to raw reads for one or more organisms (in the form of a CSV file) and produces reports as interactive HTML pages or PDF documents. Significant features include the ability to analyze unidentified eukaryotic and prokaryotic samples, creation of reports for multiple user-defined groupings of samples, automated discovery and downloading of reference assemblies from NCBI RefSeq, and rapid initial identification based on k-mer sketches followed by a more robust core genome phylogeny and SNP-based phylogeny.
The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!
On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world data sets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.
1. Install Nextflow (`>=21.10.3`).

2. Install any of Docker, Singularity (you can follow this tutorial), Podman, Shifter or Charliecloud for full pipeline reproducibility. You can use Conda both to install Nextflow itself and to manage software within pipelines, but please only use it within pipelines as a last resort; see docs.

3. Download the pipeline and test it on a minimal dataset with a single command:

   ```bash
   nextflow run nf-core/pathogensurveillance -profile test,YOURPROFILE --outdir <OUTDIR> -resume
   ```
   Note that some form of configuration will be needed so that Nextflow knows how to fetch the required software. This is usually done in the form of a config profile (`YOURPROFILE` in the example command above). You can chain multiple config profiles in a comma-separated string.

   > - The pipeline comes with config profiles called `docker`, `singularity`, `podman`, `shifter`, `charliecloud` and `conda` which instruct the pipeline to use the named tool for software management. For example, `-profile test,docker`.
   > - Please check nf-core/configs to see if a custom config file to run nf-core pipelines already exists for your Institute. If so, you can simply use `-profile <institute>` in your command. This will enable either `docker` or `singularity` and set the appropriate execution settings for your local compute environment.
   > - If you are using `singularity`, please use the `nf-core download` command to download images first, before running the pipeline. Setting the `NXF_SINGULARITY_CACHEDIR` or `singularity.cacheDir` Nextflow options enables you to store and re-use the images from a central location for future pipeline runs.
   > - If you are using `conda`, it is highly recommended to use the `NXF_CONDA_CACHEDIR` or `conda.cacheDir` settings to store the environments in a central location for future pipeline runs.
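As a sketch, the cache locations mentioned above can be set as environment variables before launching the pipeline (the paths below are hypothetical; pick a shared location appropriate for your system):

```shell
# Hypothetical shared paths; adjust for your system.
export NXF_SINGULARITY_CACHEDIR=/shared/cache/singularity  # re-used Singularity images
export NXF_CONDA_CACHEDIR=/shared/cache/conda              # re-used Conda environments
```

Setting these once (e.g. in your shell profile) lets every pipeline run on the machine share the same downloaded images and environments.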
Start running your own analysis!

```bash
nextflow run nf-core/pathogensurveillance --input samplesheet.csv --outdir <OUTDIR> -profile <docker/singularity/podman/shifter/charliecloud/conda/institute> -resume
```
You can also try running a small example dataset hosted with the source code using the following command (no need to download anything):
```bash
nextflow run nf-core/pathogensurveillance --input https://raw.githubusercontent.com/grunwaldlab/pathogensurveillance/master/test/data/metadata_small.csv --outdir test_out --download_bakta_db true -profile docker -resume
```
The nf-core/pathogensurveillance pipeline comes with documentation about the pipeline usage, parameters and output.
The primary input to the pipeline is a CSV (comma-separated values) file.
This can be made in a spreadsheet program like LibreOffice Calc or Microsoft Excel by exporting to CSV.
Columns can be in any order and unneeded columns can be left out or left blank.
Column names are case insensitive and spaces are equivalent to underscores.
Only a single column containing either paths to raw sequence data, SRA (Sequence Read Archive) accessions, or NCBI queries to search the SRA is required, and each sample can have values in different columns.
Any columns not recognized by `pathogensurveillance` will be ignored, allowing users to adapt existing sample metadata tables by adding new columns.
Below is a description of each column used by `pathogensurveillance`:

- `sample_id`: The unique identifier for each sample. This will be used in file names to distinguish samples in the output. Each sample ID must correspond to a single source of sequence data (the `path` and `ncbi_accession` columns), although the same sequence data can be used by different IDs. Any values supplied that correspond to different sources of sequence data or contain characters that cannot appear in file names (`\/:*?"<>| .`) will be modified automatically. If not supplied, it will be inferred from the `path`, `ncbi_accession`, or `name` columns.
- `name`: A human-readable label for the sample. If not supplied, it defaults to `sample_id`.
- `path`: Path to input sequence data, such as a FASTQ file of short reads or the forward reads of a paired-end set; `path_2` is used for the reverse reads. This can be a local file path or a URL to an online location. The `sequence_type` column must have a value.
- `path_2`: Path to the FASTQ file of reverse reads when paired-end sequencing is used. This can be a local file path or a URL to an online location. The `sequence_type` column must have a value.
- `ncbi_accession`: An SRA accession for reads to download and use as a sample. The `sequence_type` column will be looked up if not supplied.
- `ncbi_query`: An NCBI search query used to find reads in the SRA to download and use as samples. The number of results used is limited by the `ncbi_query_max` column. Values in the `sample_id`, `name`, and `description` columns will be appended to those supplied by the user. Values in the `sequence_type` column will be looked up and do not need to be supplied by the user.
- `ncbi_query_max`: The maximum number of results downloaded for the corresponding query in the `ncbi_query` column. Adding a `%` to the end of a number indicates a percentage of the total number of results instead of a count. A random subset of results will be downloaded if `ncbi_query_max` is less than "100%" or the total number of results.
- `sequence_type`: The type of sequencing technology used to produce the reads in the `reads_1` and `reads_2` columns. Valid values include anything containing the words "illumina", "nanopore", or "pacbio". This will be looked up automatically for `ncbi_accession` and `ncbi_query` inputs but must be supplied by the user for `path` inputs.
- `report_group_ids`: One or more report group IDs separated by `;`, determining which reports a sample appears in. For example, `all;subset` will put the sample in both the `all` and `subset` report groups. Samples will be added to a default group if this is not supplied.
- `ref_group_ids`: One or more reference group IDs separated by `;`, used to assign specific references to specific samples. These IDs correspond to IDs listed in the `ref_group_ids` or `ref_id` columns of the reference metadata CSV.

Additionally, users can supply a reference metadata CSV that can be used to assign custom references to particular samples.
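For illustration, a minimal sample metadata CSV using a few of the columns described above might look like the following (the sample IDs, file paths, and group names are hypothetical):

```csv
sample_id,path,path_2,sequence_type,report_group_ids
samp1,data/samp1_R1.fastq.gz,data/samp1_R2.fastq.gz,illumina,all
samp2,data/samp2_R1.fastq.gz,data/samp2_R2.fastq.gz,illumina,all;subset
samp3,data/samp3.fastq.gz,,nanopore,all
```

Here `samp2` would appear in both the `all` and `subset` reports, while `samp3` supplies single-end Nanopore reads, so `path_2` is left blank.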
References are assigned to samples when they share a reference group ID in the `ref_group_ids` column, which can appear in both input CSVs.
The reference metadata CSV can have the following columns:

- `ref_group_ids`: One or more reference group IDs separated by `;`. These are matched against the `ref_group_ids` column of the sample metadata CSV to assign references to particular samples.
- `ref_id`: The unique identifier for each user-defined reference genome. This will be used in file names to distinguish references in the output. Each reference ID must correspond to a single source of reference data (the `ref_path`, `ref_ncbi_accession`, and `ref_ncbi_query` columns), although the same reference data can be used by multiple IDs. Any values that correspond to different sources of reference data or contain characters that cannot appear in file names (`\/:*?"<>| .`) will be modified automatically. If not supplied, it will be inferred from the `ref_path` or `ref_name` columns, or supplied automatically when `ref_ncbi_accession` or `ref_ncbi_query` are used.
- `ref_name`: A human-readable label for the reference. If not supplied, it defaults to `ref_id`. It will be supplied automatically when the `ref_ncbi_query` column is used.
- `ref_description`: A description of the reference. If not supplied, it defaults to `ref_name`. It will be supplied automatically when the `ref_ncbi_query` column is used.
- `ref_path`: Path to reference sequence data. This can be a local file path or a URL to an online location.
- `ref_ncbi_accession`: An NCBI accession for a reference assembly to download and use.
- `ref_ncbi_query`: An NCBI search query used to find reference assemblies to download. The number of results used is limited by the `ref_ncbi_query_max` column. Values in the `ref_id`, `ref_name`, and `ref_description` columns will be appended to those supplied by the user.
- `ref_ncbi_query_max`: The maximum number of results downloaded for the corresponding query in the `ref_ncbi_query` column. Adding a `%` to the end of a number indicates a percentage of the total number of results instead of a count. A random subset of results will be downloaded if `ref_ncbi_query_max` is less than "100%" or the total number of results.

nf-core/pathogensurveillance was originally written by Zachary S.L. Foster, Martha Sudermann, Nicholas C. Cauldron, Fernanda I. Bocardo, Hung Phan, Jeff H. Chang, Niklaus J. Grünwald.
We thank the following people for their extensive assistance in the development of this pipeline:
If you would like to contribute to this pipeline, please see the contributing guidelines.
For further information or help, don't hesitate to get in touch on the Slack #pathogensurveillance
channel (you can join with this invite).
An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md
file.
You can cite the nf-core
publication as follows:
> The nf-core framework for community-curated bioinformatics pipelines.
>
> Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
>
> Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.