Summary statistics with HDF5
Python version >= 3.5 required
Docker documentation: https://docs.docker.com
git clone https://github.com/EBISPOT/SumStats.git
cd SumStats
docker build -t sumstats .
You can pass the -create flag when running setup_configuration.py and it will create the directory layout as described below.
docker run -i -p 8080:8080 -v $(pwd)/files/toload:/application/files/toload -v $(pwd)/files/output:/application/files/output -v $(pwd)/bin:/scripts -v $(pwd)/config:/application/config -t sumstats
Run export SS_CONFIG=<path to json config> if you want to pass in a different configuration than the default one.
python /scripts/preparation/load.py -f <file to be loaded> -study <study accession> -trait <efo trait>
You can run all the commands described in the section below on the docker container that you have just launched.
Files produced by the sumstats package (.h5 files) should be generated in the files/output volume.
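As a concrete sketch of the docker workflow above, the steps below chain the commands together; the file name, study accession and trait are hypothetical placeholders, and exact paths may differ in your setup:
# build the image and start the container with the volume mounts described above
docker build -t sumstats .
docker run -i -p 8080:8080 -v $(pwd)/files/toload:/application/files/toload -v $(pwd)/files/output:/application/files/output -v $(pwd)/bin:/scripts -v $(pwd)/config:/application/config -t sumstats
# inside the container, load a (hypothetical) study file that was placed in files/toload
python /scripts/preparation/load.py -f my_study.tsv -study GCST123456 -trait EFO_0001360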
git clone https://github.com/EBISPOT/SumStats.git
cd SumStats
conda env create -f sumstats.yml
conda activate sumstats
pip install -r requirements.txt
pip install .
python bin/preparation/setup_configuration.py -f <path to file to be processed> -config <path to json config>
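For instance, assuming a hypothetical summary statistics file already copied into ./files/toload, the preparation step could look like this:
python bin/preparation/setup_configuration.py -f ./files/toload/my_study.tsv -config ./config/properties.json
# add the -create flag to also create the directory layout described below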
Under the config directory you will find the files that are responsible for setting the runtime properties.
properties.py is the default one. It can be altered, but you will need to re-install the package in order for the changes to take effect.
properties.json can be edited and passed in via an environment variable (export SS_CONFIG=<path to json config>) when running:
gunicorn -b <host:port> --chdir sumstats/server --access-logfile <path to access log file> --error-logfile <path to error log file> app:app [--log-level <log level>] to run the API
gwas-search to search the database via the command line
gwas-explore to explore what is saved in the database via the command line
gwas-load to load data to the database via the command line
The properties that are being set are described below.
NOTE: local_h5files_path/h5files_path and local_tsvfiles_path/tsvfiles_path can point to the same directories with the same paths (respectively).
However, you can use the local_ variables to refer to the actual locations where these directories reside on your machine, and the variables without the local_ prefix for the locations that will be used by gwas-load, gwas-search, etc., which might be running from a docker container and therefore see different paths and/or directory names.
For example, you might want to store the files locally on your machine under ./files/toload but then mount that same directory on docker under /toload.
In this case you will set local_tsvfiles_path=./files/toload, which will be used by the setup_configuration.py script to process and split up your summary statistics file, and tsvfiles_path=/toload, which will be used when the loading/searching/etc. commands are run from docker.
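As a sketch of the example above, the path properties could be set along these lines (shown here as simple key=value pairs for readability; in properties.json they are JSON fields, and the /toload and /output container paths are illustrative mount points, not fixed names):
local_tsvfiles_path=./files/toload   # where setup_configuration.py finds the tsv files on your machine
tsvfiles_path=/toload                # the same directory as seen by gwas-load etc. inside the container
local_h5files_path=./files/output    # where the produced .h5 files live on your machine
h5files_path=/output                 # the same directory as seen from the container (illustrative)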
The directories that are created are ./files/toload and ./files/output. They do not need to be named as such, and they can be located anywhere you like. You will just need to either provide the toload and output directories as arguments when running via the command line, or set your preferred locations in the properties.json file and point to it with export SS_CONFIG=<path to json config> (or the -config <path to properties.json> flag).
In the files/output directory 3 subdirectories will be created: bytrait, bychr and bysnp.
Each one will hold the hdf5 files created by the 3 different loaders. The loaders can be run in parallel, but do not try to store more than one study at a time: this package does not support parallel study loading.
file_<efo_trait>.h5 files will be created under the bytrait directory, one for each trait loaded, where the study groups will be stored (and the corresponding info/associations).
file_<chromosome>.h5 files will be created under the bychr directory, one for each chromosome, where the bp block groups that belong to this chromosome will be stored (and the corresponding info/associations).
file_<bp_step>.h5 files will be created under the bysnp directory, one for each chromosome, where the variant groups that belong to this chromosome will be stored (and the corresponding info/associations).
In the configuration file we have set the max bp location and the bp step that we want. Each study is split into chromosomes. Each chromosome sub-set is further split up into <bp_step> pieces based on the range, i.e. bp 0 to max_bp with step bp_range, where bp_range = max_bp / bp_step.
So we loop through each chromosome for (by default) 16 ranges of base pair locations, and create separate files for each chromosome. They are then loaded into the corresponding bysnp/<chr>/file_<bp_step>.h5 file.
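A small worked example of the split, assuming a hypothetical max_bp of 300000000 and the default bp_step of 16 (the real max_bp is whatever your configuration file sets):
max_bp=300000000
bp_step=16
bp_range=$((max_bp / bp_step))   # 18750000
# block 1  covers bp 0 .. 18750000           -> bysnp/<chr>/file_1.h5
# block 2  covers bp 18750001 .. 37500000    -> bysnp/<chr>/file_2.h5
# ...
# block 16 covers bp 281250001 .. 300000000  -> bysnp/<chr>/file_16.h5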
Once the package is installed you can load studies and search for studies using the command line toolkit.
To load a study it is suggested that you first run the bin/setup_configuration.sh script.
You can then run the commands below to fully load the study in all the formats:
gwas-load -tsv chr_<x>_<filename> -study <study> -chr <x> -loader chr
gwas-load -tsv chr_<x>_<filename> -study <study> -chr <x> -loader snp
gwas-load -tsv <filename> -study <study> -trait <trait> -loader trait
Assumption: the script will assume that the tsv file is stored under ./files/toload and that the output directories will be found under the ./files/output directory (when mounted to docker as shown above, the volumes are placed in those positions).
If you need to specify a different location, modify the properties.json file and set the environment variable as: export SS_CONFIG=<path to json config>
Note that the loading commands for the chr and snp loaders need to be run for all the available chromosomes in the study.
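Put together, loading a single (hypothetical) study for chromosomes 1-22 might look like the sketch below; the file name, study accession and trait are placeholders:
gwas-load -tsv my_study.tsv -study GCST123456 -trait EFO_0001360 -loader trait
for chr in $(seq 1 22); do
  gwas-load -tsv chr_${chr}_my_study.tsv -study GCST123456 -chr ${chr} -loader chr
  gwas-load -tsv chr_${chr}_my_study.tsv -study GCST123456 -chr ${chr} -loader snp
done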
To explore the contents of the database you can use the following commands:
gwas-explore -traits will list the available traits
gwas-explore -studies will list all the available studies and their corresponding traits
gwas-explore -study <study> will list the study name and its corresponding trait
Note that the output directory is set by default to ./files/output in the properties file. If you need to specify a different location, modify the properties.json file and use the -config <path to properties.json> flag to specify it.
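For example (the study accession and config path below are hypothetical):
gwas-explore -traits
gwas-explore -study GCST123456 -config ./config/properties.json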
To actually retrieve data from the database you can use the following commands:
gwas-search -all will retrieve all the data from all the studies that are stored
gwas-search -trait <trait> will retrieve all the data for that trait
gwas-search -trait <trait> -study <study> will retrieve all the data for that trait/study combination
gwas-search -study <study> will retrieve all the data for that study
gwas-search -chr <chr> will retrieve all the data for that specific chromosome
gwas-search -snp <rsid> will retrieve all the data for that specific snp
gwas-search -snp <rsid> -chr <chr> will retrieve all the data for that specific snp, searching for it under the chromosome given
The data will by default be retrieved in batches of 20 snps and their info per query. You can loop through the data using the default size of 20 and updating the -start <start> flag as you go, or use the flags -start <start> -size <size> to specify the size of each batch you retrieve.
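For example, to page through the results for a (hypothetical) trait in batches of 20:
gwas-search -trait EFO_0001360 -start 0 -size 20
gwas-search -trait EFO_0001360 -start 20 -size 20
gwas-search -trait EFO_0001360 -start 40 -size 20
# ...and so on, increasing -start by the batch size each time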
There are two more flags that you can use:
-bp floor:ceil, e.g. -bp 10000:20000000, specifies the range of base pair locations in the chromosome that you want. It makes sense to use this when querying for a chromosome, or a trait/study.
-pval floor:ceil, e.g. -pval 2e-10:5e-5 or -pval 0.000003:4e-3, specifies the p-value range of the results.
Note that the output directory is set by default to ./files/output in the properties file. If you need to specify a different location, modify the properties.json file and use the -config <path to properties.json> flag to specify it.
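For example, to restrict a chromosome query to a base pair window and a p-value range (the chromosome number here is arbitrary, and the ranges reuse the example values above):
gwas-search -chr 10 -bp 10000:20000000 -pval 2e-10:5e-5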
To expose the API you need to run:
gunicorn -b <host:port> --chdir sumstats/server --access-logfile <path to access log file> --error-logfile <path to error log file> app:app [--log-level <log level>]
You can set the environment variable as: export SS_CONFIG=<path to json config>
to change the default properties, such as the directory where all the data is stored (the output directory), as explained in the sections above.
This will spin up the service and make it available on port 8080 (if running via docker, we exposed the port when we spun up the container).
You should be able to see the API on http://localhost:8080/
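A concrete sketch of starting the API and checking that it answers; the bind address, log file names and config path are placeholders you should adjust to your environment:
export SS_CONFIG=./config/properties.json
gunicorn -b 0.0.0.0:8080 --chdir sumstats/server --access-logfile access.log --error-logfile error.log app:app --log-level info
# in another shell, check that the service responds
curl http://localhost:8080/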