The LinkExplorer is a web-based tool for exploring nodes and relations of link prediction benchmark datasets and explanations of predictions done with the rule-based approach SAFRAN.
Included are three biomedical knowledge bases:
and two general-domain benchmarks:
Paper preprint on bioRxiv • Peer-reviewed paper in the journal Bioinformatics (for citations) • Supplementary data • Citation (BibTeX)
A screenshot tutorial of the main functionalities can be found at https://openbiolink.github.io/Explorer/
LinkExplorer retrieves metadata of benchmark datasets from RDF graphs, which provide labels and descriptions of the nodes and relations in the dataset, as well as the edges of the dataset. You can extend LinkExplorer with your own dataset metadata graph by specifying the endpoint of your RDF graph after clicking the 'Load custom endpoint' button in the dataset loading screen (`/loader`). Further information about the schema of the RDF graphs can be found here.
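As a hedged illustration, the snippet below queries such a metadata endpoint for a few node labels with a plain SPARQL request. The endpoint URL and the use of `rdfs:label` are assumptions made for this sketch; consult the schema documentation linked above for the exact vocabulary.

```python
# Sketch: fetch a few node labels from a metadata endpoint.
# The endpoint URL and the rdfs:label property are assumptions; adjust them
# to your own graph and the documented schema.
import requests

ENDPOINT = "http://localhost:9999/blazegraph/namespace/wn18rr/sparql"  # hypothetical
QUERY = """
SELECT ?node ?label WHERE {
  ?node <http://www.w3.org/2000/01/rdf-schema#label> ?label .
}
LIMIT 10
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
response.raise_for_status()
for binding in response.json()["results"]["bindings"]:
    print(binding["node"]["value"], "->", binding["label"]["value"])
```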
If you want to make your RDF graph publicly available, please contact us!
You can load any explanation file produced with SAFRAN by clicking 'Load local explanation' and selecting the generated `.db` file. The metadata graph of a custom explanation file can either be 'None', a public graph (from the dropdown), or a custom graph (see here).
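A generated explanation file is an ordinary SQLite database, so you can sanity-check a download before loading it. The sketch below (the file name is a placeholder) merely lists the tables and makes no assumptions about SAFRAN's internal schema.

```python
# Sketch: verify that a SAFRAN explanation file is a readable SQLite database
# by listing its tables. The file name is a placeholder.
import sqlite3

con = sqlite3.connect("wn18rr_max.db")  # hypothetical file name
tables = con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
con.close()
print("Tables:", [name for (name,) in tables])
```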
The LinkExplorer application can be run with Docker. We divide our application into three containers:
The LinkExplorer application is orchestrated through Docker Compose and can be run following these steps:

1. Add the explanation files and the `index.json` to `server/db` (db files of our graphs are hosted here).
2. Add the graph files referenced in `/blazegraph/setup.py` to `server/db` (the actual graphs are hosted here).
3. Run `docker-compose up --build` from the `Explorer` folder.

That's it! A full example of running YAGO3-10 can be found here.
Now the LinkExplorer app is accessible via http://localhost:5000, while Blazegraph is accessible via http://localhost:9999. Explanation files (SQLite) and the `index.json` should be added to `/server/db`. Important: the host of all SPARQL endpoints that run in the Docker Blazegraph container should be `blazegraph`, e.g.:
```json
{
  "Dataset": [
    {
      "ID": "wn18rr",
      "Endpoint": "http://blazegraph:9999/blazegraph/namespace/wn18rr/sparql",
      "Name": "WN18RR",
      "Version": "",
      "Description": "WN18RR is a link prediction dataset created from WN18, ...",
      "Explanation": [
        {
          "ID": "max",
          ...
```
Clone the repository.

Start the server by running (starts the backend on port 3001):

```bash
cd server
npm install
npm run start
```

Build the client by running:

```bash
cd client
npm install
npm run build
```

and host the static build with a web server. All `/rpc` calls have to be proxied to the backend Node server.
Example with https://github.com/lwsjs/local-web-server: the following command hosts the LinkExplorer client at port 5000 and proxies all `/rpc` calls to the backend.

```bash
ws --port 5000 --directory build --spa index.html --rewrite '/rpc -> http://localhost:3001/rpc'
```
Generally, you can use any graph database that supports RDF*/SPARQL*. We include a Blazegraph instance, which can be started by running:

```bash
cd server/blazegraph
python start.py
```

Further information can be found at https://blazegraph.com.
Optional: If you want to populate the Blazegraph server with the datasets mentioned above, run `python setup.py` against a running Blazegraph instance.
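If you want to load additional RDF data without `setup.py`, a running Blazegraph namespace also accepts data through its REST API. The sketch below is a hedged example: the endpoint, namespace and file name are placeholders, and the namespace must already exist.

```python
# Sketch: load an RDF (Turtle) file into an existing Blazegraph namespace via
# its REST API. Endpoint, namespace and file name are placeholders; setup.py
# remains the provided way to populate the included graphs.
import requests

SPARQL_ENDPOINT = "http://localhost:9999/blazegraph/namespace/wn18rr/sparql"  # hypothetical

with open("wn18rr_metadata.ttl", "rb") as rdf_file:  # hypothetical file
    response = requests.post(
        SPARQL_ENDPOINT,
        data=rdf_file,
        headers={"Content-Type": "application/x-turtle"},
    )
response.raise_for_status()
print(response.text)  # Blazegraph returns a short report of the inserted triples
```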
The central file which stores the names, endpoints, explanations, etc. of the benchmarks hosted on the server is `index.json`, which should be stored under `/server/db`. A template can be found here.
Explanation files generated by SAFRAN (`.db`) should be stored in `/server/db` and adhere to the following naming convention: `{ID of dataset}_{ID of explanation}.db`. For example, the explanation file for WN18RR and Maximum aggregation should be named `wn18rr_max.db` if the entry in the `index.json` is as follows (a small consistency check is sketched after the example):
```json
{
  "Dataset": [
    {
      "ID": "wn18rr",
      "Endpoint": "http://explore.ai-strategies.org:9999/blazegraph/namespace/wn18rr/sparql",
      "Name": "WN18RR",
      "Version": "",
      "Description": "WN18RR is a link prediction dataset created from WN18, ...",
      "Explanation": [
        {
          "ID": "max",
          "Label": "MAX",
          "Date": 1624529144,
          "Comment": "These results were retrieved by applying the MaxPlus (AnyBURL default) aggregation ...",
          "Method": "max",
          "RuleConfig": "SNAPSHOTS_AT = 1000 ...",
          "ClusteringConfig": ""
        },
        ...
```
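To check that the `.db` files in `server/db` and the entries in `index.json` stay in sync, a small script like the following can report which expected files are present; the paths are assumptions for a local checkout.

```python
# Sketch: verify that every explanation listed in index.json has a matching
# "{dataset ID}_{explanation ID}.db" file in server/db. Paths are placeholders.
import json
from pathlib import Path

db_dir = Path("server/db")  # adjust to your checkout
index = json.loads((db_dir / "index.json").read_text())

for dataset in index["Dataset"]:
    for explanation in dataset.get("Explanation", []):
        expected = db_dir / f"{dataset['ID']}_{explanation['ID']}.db"
        status = "ok" if expected.exists() else "MISSING"
        print(f"{expected.name}: {status}")
```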
```bibtex
@article{10.1093/bioinformatics/btac068,
    author = {Ott, Simon and Barbosa-Silva, Adriano and Samwald, Matthias},
    title = "{LinkExplorer: Predicting, explaining and exploring links in large biomedical knowledge graphs}",
    journal = {Bioinformatics},
    year = {2022},
    month = {02},
    abstract = "{Machine learning algorithms for link prediction can be valuable tools for hypothesis generation. However, many current algorithms are black boxes or lack good user interfaces that could facilitate insight into why predictions are made. We present LinkExplorer, a software suite for predicting, explaining and exploring links in large biomedical knowledge graphs. LinkExplorer integrates our novel, rule-based link prediction engine SAFRAN, which was recently shown to outcompete other explainable algorithms and established black box algorithms. Here, we demonstrate highly competitive evaluation results of our algorithm on multiple large biomedical knowledge graphs, and release a web interface that allows for interactive and intuitive exploration of predicted links and their explanations. A publicly hosted instance, source code and further documentation can be found at https://github.com/OpenBioLink/Explorer. Supplementary data are available at Bioinformatics online.}",
    issn = {1367-4803},
    doi = {10.1093/bioinformatics/btac068},
    url = {https://doi.org/10.1093/bioinformatics/btac068},
    note = {btac068},
    eprint = {https://academic.oup.com/bioinformatics/advance-article-pdf/doi/10.1093/bioinformatics/btac068/42447077/btac068.pdf},
}
```
This project received funding from netidee.