
SumStats


Summary statistics with HDF5

Python version >= 3.5 required

Installation - using Docker

Docker documentation: https://docs.docker.com

You can run all of the commands described in the sections below inside the Docker container once you have launched it.

Files produced by the sumstats package (.h5 files) should be generated in the files/output volume.
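The exact launch command is not shown here; a minimal sketch, assuming the image has been built locally under the name sumstats and using the mount points and port referred to elsewhere in this README, might look like:

```bash
# Assumed image name "sumstats"; the mount points /toload and /output and port
# 8080 follow the conventions used elsewhere in this README.
docker run -it \
    -v "$(pwd)/files/toload:/toload" \
    -v "$(pwd)/files/output:/output" \
    -p 8080:8080 \
    sumstats /bin/bash
```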

Installation - using conda and pip
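The installation commands themselves are not reproduced here; a minimal sketch, assuming you have cloned the repository and are installing it into a fresh conda environment (the environment name sumstats is arbitrary):

```bash
# Create and activate a Python >= 3.5 environment, then install the package
# from the root of the cloned repository.
conda create -n sumstats python=3.5
conda activate sumstats
pip install .
```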

Setting properties

Under the config directory you will find the files that are responsible for setting the runtime properties.

properties.py is the default one. It can be altered, but you will need to re-install the package in order for the changes to take effect.

properties.json can be edited and passed via an environment variable (export SS_CONFIG=<path to json config>) when running the commands described below.

The properties that are being set are:

NOTE: local_h5files_path / h5files_path and local_tsvfiles_path / tsvfiles_path can point to the same directories with the same paths (respectively). Use the local_ variables to refer to the actual locations where these directories reside on your machine, and the variables without the local_ prefix for the locations used by gwas-load, gwas-search, etc., which might be running from a Docker container where the same directories have different paths and/or names.

For example, you might want to store the files locally on your machine under ./files/toload, but then mount that same directory in the Docker container under /toload.

In this case you would set local_tsvfiles_path=./files/toload, which is used by the setup_configuration.py script to process and split up your summary statistics file, and tsvfiles_path=/toload, which is used when loading, searching, etc., whenever those commands are run from Docker.
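A minimal sketch of that setup, assuming you keep a local copy of the config (my_properties.json is a hypothetical name, and the real properties.json contains more settings than the two keys shown):

```bash
# Hypothetical local config copy holding the two path properties discussed
# above, with SS_CONFIG pointed at it.
cat > my_properties.json <<'EOF'
{
  "local_tsvfiles_path": "./files/toload",
  "tsvfiles_path": "/toload"
}
EOF
export SS_CONFIG="$(pwd)/my_properties.json"
```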

Default directory layout

The directories that are created are ./files/toload and ./files/output. They do not need to be named this way and can be located anywhere you like; you just need to provide the toload and output directories either as arguments when running via the command line, or via the properties file described in the Setting properties section above.

In the files/output directory 3 subdirectories will be created:

Each one will hold the HDF5 files created by one of the three loaders. The loaders can be run in parallel, but do not try to store more than one study at a time: this package does not support parallel study loading.

In the configuration file we have set the maximum base-pair location (max_bp) and the bp step (bp_step) that we want. Each study is split into chromosomes, and each chromosome subset is further split into bp_step pieces by range: bp 0 to max_bp in steps of bp_range, where bp_range = max_bp / bp_step.

So, for each chromosome, we loop through (by default) 16 ranges of base-pair locations and create separate files for each range; the data are then loaded into the corresponding bysnp/<chr>/file_<bp_step>.h5 file.
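As a worked example of that arithmetic (the max_bp value below is made up for illustration; only the 16 ranges are the documented default):

```bash
# Illustrative only: compute the per-file base-pair ranges for one chromosome.
max_bp=240000000   # example value, not necessarily the package default
bp_step=16         # default number of ranges per chromosome
bp_range=$(( max_bp / bp_step ))
for i in $(seq 1 $bp_step); do
    echo "file_${i}.h5 covers bp $(( (i - 1) * bp_range )) to $(( i * bp_range ))"
done
```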

Loading

Once the package is installed you can load studies and search for studies using the command-line toolkit.

To load a study it is suggested that you first run the bin/setup_configuration.sh script on the file. This script will copy the file into the files/toload directory and will split the study up by chromosome, creating as many files as there are chromosomes represented in the study. They will be named chr_<x>_<filename>.
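The script's exact arguments are not documented here; an assumed invocation, passing the summary statistics file to be split:

```bash
# Assumption: the script takes the tsv file as its argument.
./bin/setup_configuration.sh <filename>
```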

You can then run the below commands to fully load the study in all the formats:

  1. gwas-load -tsv chr_<x>_<filename> -study <study> -chr <x> -loader chr
  2. gwas-load -tsv chr_<x>_<filename> -study <study> -chr <x> -loader snp
  3. gwas-load -tsv <filename> -study <study> -trait <trait> -loader trait

Assumption:

The script will assume that the tsv file is stored under ./files/toload and that the output directories will be found under the ./files/output directory (when mounted to Docker as shown above, the volumes are placed in those locations).

If you need to specify a different location, modify the properties.json file and set the environment variable as: export SS_CONFIG=<path to json config>

Note that the loading commands for the chr and snp loaders need to be run for all of the chromosomes available in the study, as sketched below.
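A sketch of that loop, assuming chromosomes 1 through 22 are present in the study (use only the chromosome files that setup_configuration.sh actually produced):

```bash
# Run the chr and snp loaders for every chromosome file produced for the study.
for chr in $(seq 1 22); do
    gwas-load -tsv chr_${chr}_<filename> -study <study> -chr ${chr} -loader chr
    gwas-load -tsv chr_${chr}_<filename> -study <study> -chr ${chr} -loader snp
done
```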

Exploring

To explore the contents of the database you can use the following commands:

Note that the output directory is set by default to ./files/output in the properties file. If you need to specify a different location, modify the properties.json file and pass it with the -config <path to properties.json> flag.

Searching

To actually retrieve data from the database you can use the following commands:

The data will by default be retrieved in batches of 20 SNPs (and their associated information) per query. You can loop through the data using the default size of 20 and updating the start flag as you go (-start <start>), or use the flags -start <start> -size <size> to specify the size of each retrieval.

There are two more flags that you can use (an example query is sketched after this list):

  1. -bp floor:ceil, e.g. -bp 10000:20000000, which specifies the range of base-pair locations in the chromosome that you want. This makes sense to use when querying for a chromosome, or for a trait/study.
  2. -pval floor:ceil, e.g. -pval 2e-10:5e-5 or -pval 0.000003:4e-3, which specifies the p-value range of the results.
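As a sketch of how these flags combine (the -chr selector is an assumption based on the loader flags above; -start, -size, -bp and -pval are as described):

```bash
# Hypothetical example: page through chromosome 1 results restricted to a
# base-pair window and a p-value range, 20 SNPs at a time.
gwas-search -chr 1 -bp 10000:20000000 -pval 2e-10:5e-5 -start 0 -size 20
```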

Note that the output directory is set by default to ./files/output in the properties file. If you need to specify a different location, modify the properties.json file and pass it with the -config <path to properties.json> flag.

Exposing the API

To expose the API you need to run: gunicorn -b <host:port> --chdir sumstats/server --access-logfile <path to access log file> --error-logfile <path to error log file> app:app [--log-level <log level>]
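For example, with the host, port, and log locations filled in for illustration:

```bash
# Concrete instantiation of the command above; paths and log level are examples.
gunicorn -b 0.0.0.0:8080 --chdir sumstats/server \
    --access-logfile ./access.log --error-logfile ./error.log \
    app:app --log-level info
```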

You can set the environment variable as export SS_CONFIG=<path to json config> to change the default properties, such as the directory where all the data is stored (the output directory), as explained in the sections above.

This will spin up the service and make it available on port 8080 (if running via Docker, we exposed the port when we spun up the container).

You should be able to see the API on http://localhost:8080/