OLS4 is available at https://www.ebi.ac.uk/ols4/. Please report any issues to the tracker in this repository.
This is version 4 of the EMBL-EBI Ontology Lookup Service (OLS).
This repository contains three projects:

- the dataload (`dataload` directory)
- the API server (`backend` directory)
- the frontend (`frontend` directory)

If you want to try OLS4 out, this should get you going:
export OLS4_CONFIG=./dataload/configs/efo.json
docker compose up
You should now be able to access the OLS4 frontend at http://localhost:8081.
If you need to set the heap size, you can do so using:
JAVA_OPTS="-Xms5G -Xmx25G" docker compose up
If you want to test it with your own ontology, copy the OWL or RDFS ontology file to the `testcases` folder (which is mounted in Docker). Then make a new config file for your ontology in `dataload/configs` (you can use `efo.json` as a template). For the `ontology_purl` property in the config, use e.g. `file:///opt/dataload/testcases/myontology.owl` if your ontology is in `testcases/myontology.owl`. Then follow the above steps for EFO with the config filename you created.
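As a sketch, a minimal config might look like the following. This is based on the general shape of `efo.json`; treat all fields other than `ontology_purl` as illustrative and copy the real structure from `efo.json`:

cat > dataload/configs/myontology.json <<'EOF'
{
  "ontologies": [
    {
      "id": "myontology",
      "ontology_purl": "file:///opt/dataload/testcases/myontology.owl"
    }
  ]
}
EOF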
OLS4 is deployed to Kubernetes using the Docker images built and uploaded to this repository (via GitHub Packages). You will need `kubectl` and Helm installed, with `KUBECONFIG` pointing at the target cluster.
To create your own Solr and Neo4j data archives, follow the steps on how to load data locally.
Uninstall any existing `dataserver` deployment before installing a new one, and do not forget to set the `KUBECONFIG` environment variable.
export KUBECONFIG=<K8S_CONFIG>
helm install ols4-dataserver --wait <OLS4_DIR>/k8chart/dataserver
From your local directory, copy the Solr and Neo4j data archive files to the `dataserver`:
kubectl cp <LOCAL_DIR>/neo4j.tgz $(kubectl get pods -l app=ols4-dataserver -o custom-columns=:metadata.name):/usr/share/nginx/html/neo4j.tgz
kubectl cp <LOCAL_DIR>/solr.tgz $(kubectl get pods -l app=ols4-dataserver -o custom-columns=:metadata.name):/usr/share/nginx/html/solr.tgz
Uninstall any existing `ols4` deployment before installing a new one, and do not forget to set the `KUBECONFIG` environment variable.
IMPORTANT: `imageTag` specifies which Docker image (uploaded to this repository) will be used in the deployment. If you are unsure which to use, simply use either the `dev` or `stable` image.
export KUBECONFIG=<K8S_CONFIG>
helm install ols4 <OLS4_DIR>/k8chart/ols4 --set imageTag=dev
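To check that the deployment has come up, a generic check (exact pod names depend on the chart):

kubectl get pods
helm status ols4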
OLS differs from most webapps in that its API provides both full-text search and recursive graph queries, neither of which is feasible or performant on a traditional RDBMS. It therefore uses two specialized database servers: Solr, a Lucene server similar to Elasticsearch; and Neo4j, a graph database.
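To illustrate the two query styles, here is a hedged sketch assuming a local stack as set up later in this document. The `ols4_entities` core name and the `OntologyClass` label/`id` property appear in the dataload steps below, but the `label` field, the example `id` value, and the relationship type are assumptions rather than the exact OLS4 schema:

# Full-text search against the Solr core built by the dataload
curl 'http://localhost:8983/solr/ols4_entities/select?q=label:diabetes'

# Recursive ancestor traversal in Neo4j (relationship type is an assumption)
echo 'MATCH (c:OntologyClass {id:"EFO_0000400"})-[:parent*]->(a) RETURN a.id;' \
  | "$NEO4J_HOME/bin/cypher-shell" -u neo4j -p <PASSWORD>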
- The `dataload` directory contains the code that turns ontologies from RDF (specified using OWL and/or RDFS) into JSON and CSV datasets, which can be loaded into Solr and Neo4j respectively, plus some minimal bash scripts to help with loading them.
- The `backend` directory contains a Spring Boot application that hosts the OLS API over the above Solr and Neo4j instances.
- The `frontend` directory contains the React frontend built upon the backend above.

You can run OLS4, or any combination of its constituent parts (dataload, backend, frontend), in Docker. When developing, it is often useful to run, for example, just Solr and Neo4j in Docker while running the API server locally; or to run Solr, Neo4j, and the backend API server in Docker while running the frontend locally.
First install the latest version of Docker Desktop if you are on Mac or Windows; this now includes the `docker compose` command. If you are on Linux, make sure you have the `docker compose` plugin installed (`apt install docker.io docker-compose-plugin` on Ubuntu).
You will need a config file, which configures the ontologies to load into OLS4. You can provide this to `docker compose` using the `OLS4_CONFIG` environment variable. For example:
export OLS4_CONFIG=./dataload/configs/efo.json
Then, start up the components you would like to run. For example, Solr and Neo4j only (to develop the backend API server and/or frontend):
docker compose up --force-recreate --build --always-recreate-deps --attach-dependencies ols4-solr ols4-neo4j
This will build and run the dataload, and start up Solr and Neo4j with your new dataset on ports 8983 and 7474, respectively. To start Solr and Neo4j AND the backend API server (to develop the frontend):
docker compose up --force-recreate --build --always-recreate-deps --attach-dependencies ols4-solr ols4-neo4j ols4-backend
To start everything, including the frontend:
docker compose up --force-recreate --build --always-recreate-deps --attach-dependencies ols4-solr ols4-neo4j ols4-backend ols4-frontend
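Once everything is up, a quick smoke test against the API (port 8080 matches the local backend used elsewhere in this document; the `/api/ontologies` path mirrors the public OLS API and is an assumption for a local instance):

curl http://localhost:8080/api/ontologies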
Alternatively, you can run OLS4 or any of its constituent parts locally, which is more useful for development. You will need Java and Maven (for the dataload and backend), Node.js and npm (for the frontend), and local installations of Neo4j and Solr.
Clone repo:
git clone git@github.com:EBISPOT/ols4.git
Build backend:
mvn clean package
Build frontend:
npm install
The scripts below assume you have the following environment variables set:

- `NEO4J_HOME`
- `SOLR_HOME`
- `OLS4_HOME` - this should point to the root folder where you have the OLS4 code.
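For example (installation paths and version numbers are illustrative):

export NEO4J_HOME=$HOME/tools/neo4j-community-4.4.9
export SOLR_HOME=$HOME/tools/solr-9.0.0
export OLS4_HOME=$HOME/ols4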
Change directory to `$OLS4_HOME`:
cd $OLS4_HOME
To load a testcase and start Neo4J and Solr, run:
./dev-testing/teststack.sh <rel_json_config_url> <rel_output_dir>
where `<rel_json_config_url>` can be a JSON config file or a directory containing JSON config files, and `<rel_output_dir>` is the output directory, both relative to `$OLS4_HOME`, e.g.:
./dev-testing/teststack.sh ./testcases/owl2-primer/minimal.json ./output
or if you want to load all testcases, you can use
./dev-testing/teststack.sh ./testcases ./output
If you need to set the Java heap size, you can set the `JAVA_OPTS` environment variable as follows:
export JAVA_OPTS="-Xms5G -Xmx10G"
Once Neo4J and Solr are up, you can start the backend (REST API) by running:
./dev-testing/start-backend.sh
Once the backend is up, you can start the frontend with:
./dev-testing/start-frontend.sh
Once you are done testing, to stop everything:
./dev-testing/stopNeo4JSolr.sh
All related files for loading and processing data are in the `dataload` directory.
First, make sure the configuration files (which determine the ontologies to load) are ready, then build all the JAR files:
cd dataload
mvn clean package
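Then run the dataload steps in order. First, download the configured ontologies to a local cache with the predownloader: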
java \
-DentityExpansionLimit=0 \
-DtotalEntitySizeLimit=0 \
-Djdk.xml.totalEntitySizeLimit=0 \
-Djdk.xml.entityExpansionLimit=0 \
-jar predownloader.jar \
--config <CONFIG_FILE> \
--downloadPath <DOWNLOAD_PATH>
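Next, convert the downloaded RDF into a single JSON file with `rdf2json`: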
java \
-DentityExpansionLimit=0 \
-DtotalEntitySizeLimit=0 \
-Djdk.xml.totalEntitySizeLimit=0 \
-Djdk.xml.entityExpansionLimit=0 \
-jar rdf2json.jar \
--downloadedPath <DOWNLOAD_PATH> \
--config <CONFIG_FILE> \
--output <LOCAL_DIR>/output_json/ontologies.json
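Then run the `linker` over the combined JSON, which uses a LevelDB store as scratch space while resolving references between the loaded ontologies: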
java \
-jar linker.jar \
--input <LOCAL_DIR>/output_json/ontologies.json \
--output <LOCAL_DIR>/output_json/ontologies_linked.json \
--leveldbPath <LEVEL_DB_DIR>
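Next, convert the linked JSON into CSV files for the Neo4j bulk importer with `json2neo`: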
java \
-jar json2neo.jar \
--input <LOCAL_DIR>/output_json/ontologies_linked.json \
--outDir <LOCAL_DIR>/output_csv/
Run the Neo4j `import` command:
./neo4j-admin import \
--ignore-empty-strings=true \
--legacy-style-quoting=false \
--array-delimiter="|" \
--multiline-fields=true \
--database=neo4j \
--read-buffer-size=134217728 \
$(<LOCAL_DIR>/make_csv_import_cmd.sh)
Here is a sample `make_csv_import_cmd.sh` file:
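# Emit a --nodes argument for every node CSV and a
# --relationships argument for every edge CSV: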
for f in ./output_csv/*_ontologies.csv
do
echo -n "--nodes=$f "
done
for f in ./output_csv/*_classes.csv
do
echo -n "--nodes=$f "
done
for f in ./output_csv/*_properties.csv
do
echo -n "--nodes=$f "
done
for f in ./output_csv/*_individuals.csv
do
echo -n "--nodes=$f "
done
for f in ./output_csv/*_edges.csv
do
echo -n "--relationships=$f "
done
Start Neo4j locally and then run the following index-creation commands, which are also defined in `create_indexes.cypher` inside the `dataload` directory:
CREATE INDEX FOR (n:OntologyClass) ON n.id;
CREATE INDEX FOR (n:OntologyIndividual) ON n.id;
CREATE INDEX FOR (n:OntologyProperty) ON n.id;
CREATE INDEX FOR (n:OntologyEntity) ON n.id;
CALL db.awaitIndexes(10800);
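One way to run them is via `cypher-shell`, reading the file from stdin (credentials are placeholders):

"$NEO4J_HOME/bin/cypher-shell" -u neo4j -p <PASSWORD> < dataload/create_indexes.cypher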
After creating the indexes, stop Neo4j as needed.
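Next, generate the Solr JSONL files with `json2solr`: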
java \
-jar json2solr.jar \
--input <LOCAL_DIR>/output_json/ontologies_linked.json \
--outDir <LOCAL_DIR>/output_jsonl/
Before running Solr, make sure to copy the configuration (`solr_config`) from inside the `dataload` directory to your local Solr installation, e.g. `<SOLR_DIR>/server/solr/`.
Then start Solr locally and load the generated JSONL files. See the sample commands below:
wget \
--method POST --no-proxy -O - --server-response --content-on-error=on \
--header="Content-Type: application/json" \
--body-file <LOCAL_DIR>/output_jsonl/ontologies.jsonl \
http://localhost:8983/solr/ols4_entities/update/json/docs?commit=true
wget \
--method POST --no-proxy -O - --server-response --content-on-error=on \
--header="Content-Type: application/json" \
--body-file <LOCAL_DIR>/output_jsonl/classes.jsonl \
http://localhost:8983/solr/ols4_entities/update/json/docs?commit=true
wget --method POST --no-proxy -O - --server-response --content-on-error=on \
--header="Content-Type: application/json" \
--body-file <LOCAL_DIR>/output_jsonl/properties.jsonl \
http://localhost:8983/solr/ols4_entities/update/json/docs?commit=true
wget --method POST --no-proxy -O - --server-response --content-on-error=on \
--header="Content-Type: application/json" \
--body-file <LOCAL_DIR>/output_jsonl/individuals.jsonl \
http://localhost:8983/solr/ols4_entities/update/json/docs?commit=true
wget --method POST --no-proxy -O - --server-response --content-on-error=on \
--header="Content-Type: application/json" \
--body-file <LOCAL_DIR>/output_jsonl/autocomplete.jsonl \
http://localhost:8983/solr/ols4_autocomplete/update/json/docs?commit=true
Update the `ols4_entities` core:
wget --no-proxy -O - --server-response --content-on-error=on \
http://localhost:8983/solr/ols4_entities/update?commit=true
Update the `ols4_autocomplete` core:
wget --no-proxy -O - --server-response --content-on-error=on \
http://localhost:8983/solr/ols4_autocomplete/update?commit=true
After updating the indexes, stop Solr as needed.
Finally, create archives of both the Solr and Neo4j data folders.
tar --use-compress-program="pigz --fast --recursive" \
-cf <LOCAL_DIR>/neo4j.tgz -C <LOCAL_DIR>/neo4j/data .
tar --use-compress-program="pigz --fast --recursive" \
-cf <LOCAL_DIR>/solr.tgz -C <LOCAL_DIR>/solr/server solr
The API server is a Spring Boot application located in `backend`. Set the following environment variables to point it at your local (Dockerized) Solr and Neo4j servers:
OLS_SOLR_HOST=http://localhost:8983
OLS_NEO4J_HOST=bolt://localhost:7687
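With those set, you can start the API server from the `backend` directory; a sketch assuming the standard Spring Boot Maven plugin is configured (alternatively use `./dev-testing/start-backend.sh` as described above):

cd backend
mvn spring-boot:run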
The frontend is a React application in `frontend`. See the frontend docs for details on how to run it.
testcases_expected_output and testcases_expected_output_api

If you make changes to the dataload or API of OLS, you need to run the testcases and compare the results against the expected outputs (held in `testcases_expected_output` and `testcases_expected_output_api`) to ensure backward compatibility. This testing consists of two parts: dataload testing and API testing. These tests are run locally as described in "Test testcases from dataload to UI".
Ensure that the environment variables `NEO4J_HOME`, `SOLR_HOME`, and `OLS4_HOME` are set up accordingly.
Before running your testcases, ensure that your work is already committed. Create a new branch based on the branch you worked on, but with a `-testcases` suffix. For example, if your branch is called "fix-xyz", the new branch for the testcases will be `fix-xyz-testcases`. We commit testcases to a separate branch due to the large number of files updated when testcases are run.
First make sure all the OLS4 JARs are up to date by running:
mvn clean package
Generate new output files and import into Neo4J and Solr:
./dev-testing/teststack.sh ./testcases ./testcases_output
Compare `testcases_output` with `testcases_expected_output`:
./compare_testcase_output.sh
The comparison output is written to `testcases_compare_result.log`. If no differences are found, this file will be empty. Note that `testcases_compare_result.log` only tells you which files differ. To see the actual differences indicated in `testcases_compare_result.log`, compare the files it lists in a visual diff tool such as Meld. Ensure that all the differences can be explained and make sense.
Once you are happy with the output in `testcases_output`, remove the old `testcases_expected_output` and replace it with the new expected output:
rm -rf testcases_expected_output
cp -r testcases_output/testcases testcases_expected_output
Add the updated expected output to Git:
git add -A testcases_expected_output
Commit the updates to the testcases to a branch with suffix `-testcases` and message "TESTCASES updated".
Now continue with API testing.
Before doing API testing you must have completed the dataload testing.
Before running the API tests, create a new branch with suffix `-api-tests`. For example, if the branch you worked on is called "fix-xyz", the new branch will be `fix-xyz-api-tests`.
Start the backend:
./dev-testing/start-backend.sh
Run the API tests against the backend using:
./test_api_fast.sh http://localhost:8080 ./testcases_output_api ./testcases_expected_output_api --deep
The results of the API tests are written to `./apitester4.log`. Differences are written to the end of the file. When there are no differences, this file will end with these lines:
RecursiveJsonDiff.diff() reported success
apitester reported success; exit code 0
Ensure that all differences listed in `./apitester4.log` are accounted for. Once you are happy with the output, remove the old `testcases_expected_output_api` and replace it with the new expected output:
rm -rf testcases_expected_output_api
cp -r testcases_output_api testcases_expected_output_api
Add the latest expected outputs to Git:
git add -A testcases_expected_output_api
Commit the API tests to a branch with suffix `-api-tests` and message "API-TESTS updated".
You can stop the OLS4 backend with "Ctrl-C", and Solr and Neo4J with:
./dev-testing/stopNeo4JSolr.sh