Simple CLI tools to load a subset of Wikidata into Elasticsearch. Part of the Heritage Connector project.
Running text search programmatically on Wikidata means using the MediaWiki query API, either directly or through the Wikidata query service/SPARQL.
There are a couple of reasons you may not want to do this when running searches programmatically, chief among them Wikidata's rate limits.

CirrusSearch is a Wikidata extension that enables direct search on Wikidata using Elasticsearch; it's worth considering if you require powerful search and are happy with the rate limit.
Install from pypi: `pip install elastic_wikidata`

Install from the repo: clone or download it, `cd` into its root, and run `pip install -e .`
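For example, in a shell (the directory name below assumes a local clone of the repo):

```bash
# install the released package from pypi
pip install elastic_wikidata

# or install an editable copy from a clone of the repo
cd elastic-wikidata
pip install -e .
```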
elastic-wikidata needs the Elasticsearch credentials `ELASTICSEARCH_CLUSTER`, `ELASTICSEARCH_USER` and `ELASTICSEARCH_PASSWORD` to connect to your ES instance. You can set these in one of three ways:

1. As environment variables: `export ELASTICSEARCH_CLUSTER=https://...` etc.
2. Using the `-c` parameter followed by a path to an ini file containing your Elasticsearch credentials.
3. At runtime, using the `--cluster/-c`, `--user/-u` and `--password/-p` options.
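For example, to use the environment-variable route (all values below are placeholders):

```bash
# Elasticsearch credentials for elastic-wikidata; replace with your own cluster details
export ELASTICSEARCH_CLUSTER=https://my-cluster.example.com:9243
export ELASTICSEARCH_USER=my_user
export ELASTICSEARCH_PASSWORD=my_password
```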
Once installed, the package is accessible through the keyword `ew`. A call is structured as follows: `ew <task> <options>`

Task is either:

* `dump`: load data from a Wikidata JSON dump, or
* `query`: load data from a SPARQL query.

A full list of options can be found with `ew --help`, but the following are likely to be useful (see the example below):
* `--index/-i`: the index name to push to. If not specified at runtime, elastic-wikidata will prompt for it.
* `--limit/-l`: limit the number of records pushed into ES. You might want to use this for a small trial run before importing the whole thing.
* `--properties/-prop`: a whitespace-separated list of properties to include in the ES index, e.g. 'p31 p21', or the path to a text file containing newline-separated properties.
* `--language/-lang`: Wikimedia language code. Only one is supported at this time.

To load data from a Wikidata dump, run `ew dump -p <path_to_json> <other_options>`.
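For instance, a call combining these options might look like the following sketch (the file path, index name and property list are illustrative placeholders):

```bash
# load a filtered JSON dump into the index "my-wikidata-index",
# keeping properties p31 and p21, language set to 'en', limited to 1000 records
ew dump -p ./my_subset.ndjson -i my-wikidata-index -prop 'p31 p21' -lang en -l 1000
```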
Loading from a dump is useful if you want to create one or more large subsets of Wikidata in different Elasticsearch indexes (millions of entities).
Time estimate: loading all ~8 million humans into an AWS Elasticsearch index took me about 20 minutes. Creating the humans subset using wikibase-dump-filter took about 3 hours, following its instructions for parallelising.
When creating your subset with wikibase-dump-filter, don't use its `--simplify` flag when running the dump: elastic-wikidata will take care of simplification. Then run `ew dump` with the `-p` flag pointing to the JSON subset. You might want to test it with a limit (using the `-l` flag) first, as in the sketch below.
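A minimal sketch of that workflow (file and index names are placeholders):

```bash
# trial run: push only the first 100 records into a test index
ew dump -p ./my_subset.ndjson -i test-index -l 100

# full run over the whole JSON subset
ew dump -p ./my_subset.ndjson -i my-wikidata-index
```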
To load data from a SPARQL query, run `ew query -p <path_to_sparql_query> <other_options>`.
For smaller collections of Wikidata entities it might be easier to populate an Elasticsearch index directly from a SPARQL query rather than downloading the whole Wikidata dump to take a subset. `ew query` automatically paginates SPARQL queries so that a heavy query like 'return all the humans' doesn't result in a timeout error.
Time estimate: loading 10,000 entities from Wikidata into an AWS-hosted Elasticsearch index took me about 6 minutes.
Run `ew query` with the `-p` option pointing to the file containing the SPARQL query. Optionally add a `--page_size` for the SPARQL query, as in the sketch below.
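For instance (the query file, index name and page size below are placeholders; humans.rq is assumed to contain a SPARQL query returning Wikidata entities):

```bash
# load the results of the SPARQL query in humans.rq, fetching 100 results per page
ew query -p ./humans.rq -i my-wikidata-index --page_size 100
```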
As of version 0.3.1, refreshing the search index is disabled for the duration of the load by default, as recommended by Elasticsearch. Refresh is re-enabled to the default interval of `1s` after the load is complete. To disable this behaviour, use the `--no_disable_refresh/-ndr` flag.
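For example, to keep the index refreshing at its default interval during the load (names are placeholders, as above):

```bash
ew dump -p ./my_subset.ndjson -i my-wikidata-index --no_disable_refresh
```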