A highly scalable RDF triple store with full-text and GeoSPARQL support
https://the-qa-company.com/products/qEndpoint

qEndpoint
Report a Bug · Request a Feature · Ask a Question

[![Package build and deploy](https://github.com/the-qa-company/qEndpoint/actions/workflows/package-build.yml/badge.svg)](https://github.com/the-qa-company/qEndpoint/actions/workflows/package-build.yml) [![Tests](https://github.com/the-qa-company/qEndpoint/actions/workflows/test.yml/badge.svg)](https://github.com/the-qa-company/qEndpoint/actions/workflows/test.yml) **dev** [![Tests](https://github.com/the-qa-company/qEndpoint/actions/workflows/test.yml/badge.svg?branch=dev)](https://github.com/the-qa-company/qEndpoint/actions/workflows/test.yml)
Table of Contents

- [About](#about)
  - [Built With](#built-with)
- [Getting Started](#getting-started)
  - [Prerequisites](#prerequisites)
  - [Installation](#installation)
    - [Scoop](#scoop)
    - [Brew](#brew)
    - [Command Line Interface](#command-line-interface)
    - [Code](#code)
      - [Back-end](#back-end)
      - [Front-end](#front-end)
    - [Installers](#installers)
- [Usage](#usage)
  - [Docker Image](#docker-image)
    - [`qacompany/qendpoint`](#qacompanyqendpoint)
    - [`qacompany/qendpoint-wikidata`](#qacompanyqendpoint-wikidata)
    - [Useful tools](#useful-tools)
  - [Standalone](#standalone)
  - [As a dependency](#as-a-dependency)
  - [Connecting with your Wikibase](#connecting-with-your-wikibase)
- [Roadmap](#roadmap)
- [Support](#support)
- [Project assistance](#project-assistance)
- [Contributing](#contributing)
- [Authors & contributors](#authors--contributors)
- [Security](#security)
- [Publications](#publications)
- [License](#license)

About

The qEndpoint is a highly scalable triple store with full-text and GeoSPARQL support. It can be used as a standalone SPARQL endpoint or as a dependency. For example, the qEndpoint is used in Kohesio, where each interaction with the UI corresponds to an underlying SPARQL query against the qEndpoint. qEndpoint is also part of QAnswer, enabling question answering over RDF graphs.

Built With


Getting Started

Prerequisites

For the backend/benchmark

For the frontend (not mandatory to run the backend)

Installation

Scoop

You can install qEndpoint using the Scoop package manager.

You need to add the the-qa-company bucket; you will then be able to install the qendpoint manifest. This can be done with these commands:

```shell
# Add the-qa-company bucket
scoop bucket add the-qa-company https://github.com/the-qa-company/scoop-bucket.git
# Install qEndpoint CLI
scoop install qendpoint
```

Brew

You can install qEndpoint using the Brew package manager.

You can install it using this command:

```shell
brew install the-qa-company/tap/qendpoint
```

Command Line Interface

If you don't have access to Brew or Scoop, the qEndpoint command line interface is available on the releases page as the file qendpoint-cli.zip. After extracting it, you will find a bin directory that can be added to your path.

Code

Back-end
```xml
<dependency>
    <groupId>com.the_qa_company</groupId>
    <artifactId>qendpoint</artifactId>
    <version>1.2.3</version>
</dependency>
```
Front-end

Installers

The endpoint installers for Linux, macOS, and Windows can be found here. The installers do not contain the command line interface (CLI), only the endpoint.


Usage

Docker Image

You can use one of our preconfigured Docker images.

qacompany/qendpoint

DockerHub: qacompany/qendpoint

This Docker image contains the endpoint; you can upload your dataset and start using it.

You just have to run the image and it will prepare the environment and set up the repository:

```shell
docker run -p 1234:1234 --name qendpoint qacompany/qendpoint
```

You can also specify the amount of memory allocated by setting the Docker environment variable MEM_SIZE. By default this value is set to 6G. You should not set it below 4G, because you will almost certainly run out of memory with large datasets. Bigger datasets need larger values; as an example, Wikidata-all won't run with less than 10G.

```shell
docker run -p 1234:1234 --name qendpoint --env MEM_SIZE=6G qacompany/qendpoint
```

You can stop the container and rerun it at any time, keeping the data inside (qendpoint is the name of the container):

```shell
docker stop qendpoint
docker start qendpoint
```

Note that this container may occupy a large portion of your disk due to the size of the data index, so make sure to delete it when you no longer need it:

```shell
docker rm qendpoint
```

qacompany/qendpoint-wikidata

DockerHub: qacompany/qendpoint-wikidata

This Docker image contains the endpoint with a script to download an index containing the Wikidata Truthy statements from our servers, so you simply have to wait for the index download and start using it.

You just have to run the image and it will prepare the environment by downloading the index and setting up the repository:

```shell
docker run -p 1234:1234 --name qendpoint-wikidata qacompany/qendpoint-wikidata
```

You can also specify the amount of memory allocated by setting the Docker environment variable MEM_SIZE. By default this value is set to 6G; a larger value is recommended for big datasets. As an example, Wikidata-all won't run with less than 10G.

```shell
docker run -p 1234:1234 --name qendpoint-wikidata --env MEM_SIZE=6G qacompany/qendpoint-wikidata
```

You can specify the dataset to download using the environment variable HDT_BASE; by default the value is wikidata_truthy. For example, to download the full Wikidata dump instead:

```shell
docker run -p 1234:1234 --name qendpoint-wikidata --env MEM_SIZE=10G --env HDT_BASE=wikidata_all qacompany/qendpoint-wikidata
```

You can stop the container and rerun it at any time, keeping the data inside (qendpoint-wikidata is the name of the container):

```shell
docker stop qendpoint-wikidata
docker start qendpoint-wikidata
```

Note that this container may occupy a large portion of your disk due to the size of the data index, so make sure to delete it when you no longer need it:

```shell
docker rm qendpoint-wikidata
```

Useful tools

You can access http://localhost:1234, where a GUI lets you write and execute SPARQL queries. A RESTful API is also available, which you can use to run queries from any application over HTTP, like so:

```shell
curl -H 'Accept: application/sparql-results+json' localhost:1234/api/endpoint/sparql --data-urlencode 'query=select * where{ ?s ?p ?o } limit 10'
```

Note that the first query will take some time, because the index has to be mapped to memory; later queries will be much faster!

Most of the standard SPARQL result formats are available.
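To call the same HTTP API from Java, the query has to be form-encoded exactly as curl's `--data-urlencode` does. A minimal sketch using only the standard library (the endpoint URL and `Accept` header come from the curl example above; the class and method names here are illustrative):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class SparqlRequest {
    // Form-encode the query the same way curl's --data-urlencode does
    static String encodeQuery(String sparql) {
        return "query=" + URLEncoder.encode(sparql, StandardCharsets.UTF_8);
    }

    // Build the POST request for the qEndpoint SPARQL API
    static HttpRequest buildRequest(String sparql) {
        return HttpRequest.newBuilder(URI.create("http://localhost:1234/api/endpoint/sparql"))
                .header("Accept", "application/sparql-results+json")
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(encodeQuery(sparql)))
                .build();
    }

    public static void main(String[] args) {
        // The encoded body that would be sent to the endpoint
        System.out.println(encodeQuery("select * where{ ?s ?p ?o } limit 10"));
        // With a running endpoint, send it with:
        // java.net.http.HttpClient.newHttpClient()
        //         .send(buildRequest("..."), java.net.http.HttpResponse.BodyHandlers.ofString());
    }
}
```

The response body is then a SPARQL JSON result set, which any JSON library can parse.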

Standalone

You can run the endpoint with this command:

```shell
java -jar endpoint.jar &
```

You can find a template of the application.properties file in the backend source.

If you have the HDT file of your graph, you can put it in the hdt-store directory before starting the endpoint (by default hdt-store/index_dev.hdt).

If you don't have the HDT, you can upload a dataset by running this command while the endpoint is running:

```shell
curl "http://127.0.0.1:1234/api/endpoint/load" -F "file=@mydataset.nt"
```

where mydataset.nt is the RDF file to load. You can use any of the formats supported by RDF4J.
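If you prefer to upload from Java rather than curl, the multipart/form-data body that curl's `-F` flag produces can be assembled by hand with the standard library. A minimal sketch, assuming the load endpoint from the command above; the boundary string, class name, and example triple are illustrative:

```java
public class LoadRequest {
    // Build a multipart/form-data body equivalent to curl -F "file=@mydataset.nt"
    static String multipartBody(String boundary, String filename, String content) {
        return "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"file\"; filename=\"" + filename + "\"\r\n"
                + "Content-Type: application/octet-stream\r\n"
                + "\r\n"
                + content + "\r\n"
                + "--" + boundary + "--\r\n";
    }

    public static void main(String[] args) {
        // A single N-Triples statement as example file content
        String body = multipartBody("qendpointBoundary", "mydataset.nt",
                "<http://example.org/s> <http://example.org/p> \"o\" .");
        System.out.println(body);
        // POST this body to http://127.0.0.1:1234/api/endpoint/load with the header
        // Content-Type: multipart/form-data; boundary=qendpointBoundary
    }
}
```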

As a dependency

You can create a SPARQL repository using this method; don't forget to init the repository:

```java
// Create a SPARQL repository
SparqlRepository repository = CompiledSail.compiler().compileToSparqlRepository();
// Init the repository
repository.init();
```

You can execute SPARQL queries using the executeTupleQuery, executeBooleanQuery, executeGraphQuery or execute methods.

```java
// execute a tuple query
try (ClosableResult<TupleQueryResult> execute = repository.executeTupleQuery(
        // the sparql query
        "SELECT * WHERE { ?s ?p ?o }",
        // the timeout
        10
)) {
    // get the result; no need to close it, closing execute will close the result
    TupleQueryResult result = execute.getResult();

    // the tuples
    for (BindingSet set : result) {
        System.out.println("Subject:   " + set.getValue("s"));
        System.out.println("Predicate: " + set.getValue("p"));
        System.out.println("Object:    " + set.getValue("o"));
    }
}
```

Don't forget to shut down the repository after usage:

```java
// Shutdown the repository (better to release resources)
repository.shutDown();
```

You can get the RDF4J repository with the getRepository() method.

```java
// get the rdf4j repository (if required)
SailRepository rdf4jRepo = repository.getRepository();
```

Connecting with your Wikibase


Roadmap

See the open issues for a list of proposed features (and known issues).


Support

Reach out to the maintainer at one of the following places:


Project assistance

If you want to say thank you or/and support active development of qEndpoint:


Contributing

First of all, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are greatly appreciated.

Please read our contribution guidelines, and thank you for being involved!


Authors & contributors

The original setup of this repository is by The QA Company.

For a full list of all authors and contributors, see the contributors page.


Security

qEndpoint follows good security practices, but 100% security cannot be assured. qEndpoint is provided "as is" without any warranty. Use at your own risk.

For more information and to report security issues, please refer to our security documentation.


Publications


License

This project is licensed under the GNU General Public License v3 with a notice.

See LICENSE for more information.

