
Elastic Wikidata

Simple CLI tools to load a subset of Wikidata into Elasticsearch. Part of the Heritage Connector project (https://www.sciencemuseumgroup.org.uk/project/heritage-connector/).



Why?

Running text search programmatically on Wikidata means using the MediaWiki query API, either directly or through the Wikidata query service/SPARQL.

There are a couple of reasons you may not want to do this when running searches programmatically: the public endpoints are rate-limited, and you have little control over how the text search itself behaves. Loading a subset of Wikidata into your own Elasticsearch instance avoids both constraints.

Note: CirrusSearch is a Wikidata extension that enables direct search on Wikidata using Elasticsearch, if you require powerful search and are happy with the rate limit.

Installation

from pypi: pip install elastic_wikidata

from repo:

  1. Download or clone the repository
  2. cd into the repository root
  3. pip install -e .
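
As a sketch, installing from the repository might look like this (assuming the TheScienceMuseum/elastic-wikidata GitHub repository):

```sh
# clone the repository and install in editable mode
git clone https://github.com/TheScienceMuseum/elastic-wikidata.git
cd elastic-wikidata
pip install -e .
```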

Setup

elastic-wikidata needs the Elasticsearch credentials ELASTICSEARCH_CLUSTER, ELASTICSEARCH_USER and ELASTICSEARCH_PASSWORD to connect to your ES instance. You can set these in one of three ways:

  1. Using environment variables: export ELASTICSEARCH_CLUSTER=https://... etc
  2. Using config.ini: pass the -c parameter followed by a path to an ini file containing your Elasticsearch credentials (see the sketch after this list).
  3. Pass each variable in at runtime using options --cluster/-c, --user/-u, --password/-p.
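
As a sketch of options 1 and 2, with placeholder values (the [ELASTIC] section header and key names in the config file are assumptions; check the example file in the repository for the exact format):

```sh
# Option 1: environment variables (placeholder values)
export ELASTICSEARCH_CLUSTER=https://my-cluster.example.com:9243
export ELASTICSEARCH_USER=elastic
export ELASTICSEARCH_PASSWORD=changeme

# Option 2: a config file passed with -c at runtime
# (the section header and key names below are assumptions)
cat > config.ini << 'EOF'
[ELASTIC]
ELASTICSEARCH_CLUSTER=https://my-cluster.example.com:9243
ELASTICSEARCH_USER=elastic
ELASTICSEARCH_PASSWORD=changeme
EOF
```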

Usage

Once installed, the package is accessible through the command ew. A call is structured as follows:

ew <task> <options>

<task> is either dump (load entities from a local Wikidata dump file) or query (load entities returned by a SPARQL query); both are described below.

A full list of options can be found with ew --help.
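
For example (the file names below are placeholders; the -p and -l options are covered in the sections that follow):

```sh
# list all tasks and options
ew --help

# load a local dump subset, testing on the first 100 entities
ew dump -p humans.ndjson -l 100

# load entities returned by a SPARQL query
ew query -p humans.rq
```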

Loading from Wikidata dump (.ndjson)

ew dump -p <path_to_json> <other_options>

This is useful if you want to create one or more large subsets of Wikidata in different Elasticsearch indexes (millions of entities).

Time estimate: Loading all ~8 million humans into an AWS Elasticsearch index took me about 20 minutes. Creating the humans subset using wikibase-dump-filter took about 3 hours using its instructions for parallelising.

  1. Download the complete Wikidata dump (latest-all.json.gz from the Wikimedia dumps site). This is a large file: 87GB as of July 2020.
  2. Use maxlath's wikibase-dump-filter to create a subset of the Wikidata dump. Note: don't use the --simplify flag when running the dump. elastic-wikidata will take care of simplification.
  3. Run ew dump with flag -p pointing to the JSON subset. You might want to test it with a limit (using the -l flag) first.
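
Putting those steps together, a sketch of the pipeline for an "all humans" subset might look like this (the dump URL and the wikibase-dump-filter invocation are assumptions based on that tool's documentation):

```sh
# 1. download the full Wikidata dump (URL is an assumption; ~87GB)
wget https://dumps.wikimedia.org/wikidatawiki/entities/latest-all.json.gz

# 2. filter to entities that are instances of human (P31:Q5), without --simplify
gzip -dc latest-all.json.gz | wikibase-dump-filter --claim P31:Q5 > humans.ndjson

# 3. test with a small limit, then run the full load
ew dump -p humans.ndjson -l 100
ew dump -p humans.ndjson
```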

Loading from SPARQL query

ew query -p <path_to_sparql_query> <other_options>

For smaller collections of Wikidata entities it might be easier to populate an Elasticsearch index directly from a SPARQL query rather than downloading the whole Wikidata dump to take a subset. ew query automatically paginates SPARQL queries so that a heavy query like 'return all the humans' doesn't result in a timeout error.

Time estimate: Loading 10,000 entities from Wikidata into an AWS-hosted Elasticsearch index took me about 6 minutes.

  1. Write a SPARQL query and save it to a text/.rq file (see the sketch after this list).
  2. Run ew query with the -p option pointing to the file containing the SPARQL query. Optionally add a --page_size for the SPARQL query.
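
As a sketch, a query for all humans could be written and loaded like this (the query assumes entities are returned in a variable such as ?item; check the example query in the repository for the exact expected shape):

```sh
# write a SPARQL query (all humans) to a .rq file
cat > humans.rq << 'EOF'
SELECT ?item WHERE { ?item wdt:P31 wd:Q5 . }
EOF

# load the results, paginating the SPARQL query in batches of 500
ew query -p humans.rq --page_size 500
```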

Temporary side effects

As of version 0.3.1, refreshing the search index is disabled for the duration of the load by default, as recommended by Elasticsearch. Refresh is re-enabled at the default interval of 1s after the load is complete. To disable this behaviour use the flag --no_disable_refresh/-ndr.
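
For example, to keep the index refreshing at its normal interval during the load (placeholder file name):

```sh
# keep the default refresh behaviour while loading
ew dump -p humans.ndjson --no_disable_refresh
```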