The `ERROR: Elasticsearch index pelias does not exist` error message is concerning. Did you run `pelias elastic create`?

You must use the pelias-schema tool (https://github.com/pelias/schema/) to create the index first.
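If you want to double-check from the host, one option is to ask Elasticsearch directly whether the index exists (this assumes Elasticsearch is exposed on `localhost:9200`, the default port mapping in the docker projects):

```bash
# Lists the pelias index if it exists; a 404 / index_not_found_exception
# response means the index was never created.
curl -s 'http://localhost:9200/_cat/indices/pelias?v'
```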
The contents of my .sh file are as follows. So yes, the Elasticsearch create command did run.
```bash
set -x

# change directory to where you would like to install Pelias
# cd /path/to/install

# clone this repository
git clone https://github.com/pelias/docker.git && cd docker

# install pelias script
# this is the _only_ setup command that should require `sudo`
sudo ln -s "$(pwd)/pelias" /usr/local/bin/pelias

# cd into the project directory
cd projects/planet

# create a directory to store Pelias data files
# see: https://github.com/pelias/docker#variable-data_dir
# note: use 'gsed' instead of 'sed' on a Mac
mkdir ./data
sed -i '/DATA_DIR/d' .env
echo 'DATA_DIR=./data' >> .env

# run build
pelias compose pull
pelias elastic start
pelias elastic wait
pelias elastic create
pelias download all
pelias prepare all
pelias import all
pelias compose up

# optionally run tests
pelias test run
```
For bash scripts I'd recommend `set -euxo pipefail`, which will exit on failure; just setting `-x` will not terminate the script if a command exits with a non-zero status code.

That said, it very well might have succeeded. Do you have the logs produced with `-x`?
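For reference, a hardened header for a script like the one above could look like this (a minimal sketch; these are standard bash options):

```bash
#!/bin/bash
# -e: exit immediately if any command fails
# -u: treat unset variables as an error
# -x: print each command before it is executed
# -o pipefail: a pipeline fails if any command in it fails
set -euxo pipefail
```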
Have a look inside your data dir `./data` to see which directories it contains and their relative sizes. I would expect the `elasticsearch` directory to be quite large, with many objects.

If you failed to import the documents into Elasticsearch, all is not lost: it sounds like you've already done all the lengthy `prepare` steps, so you won't need to run them again.
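For example, a quick way to compare the directory sizes (assuming your `DATA_DIR` is `./data`, as set in the script above):

```bash
# Show the size of each subdirectory in the Pelias data dir,
# sorted smallest to largest; elasticsearch should dominate.
du -sh ./data/* | sort -h
```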
Try running these commands to check the status of the Elasticsearch index:

```bash
pelias elastic start
pelias elastic wait
pelias elastic status
pelias elastic info
pelias elastic stats
```
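If you prefer to bypass the pelias wrapper, roughly equivalent checks can be run against Elasticsearch directly (again assuming it is exposed on `localhost:9200`):

```bash
# Cluster health and the document count of the pelias index.
curl -s 'http://localhost:9200/_cluster/health?pretty'
curl -s 'http://localhost:9200/pelias/_count?pretty'
```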
After you mentioned it, I checked and found the situation below. How can I overcome this?
```
pelias elastic stats
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [source] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
      }
    ],
    "type" : "search_phase_execution_exception",
    "reason" : "all shards failed",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [
      {
        "shard" : 0,
        "index" : "pelias",
        "node" : "bT2YMU7CT2y2zc2qt9e32A",
        "reason" : {
          "type" : "illegal_argument_exception",
          "reason" : "Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [source] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
        }
      }
    ],
    "caused_by" : {
      "type" : "illegal_argument_exception",
      "reason" : "Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [source] in order to load field data by uninverting the inverted index. Note that this can use significant memory.",
      "caused_by" : {
        "type" : "illegal_argument_exception",
        "reason" : "Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [source] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
      }
    }
  },
  "status" : 400
}
```
Hi everyone, I have just joined the group. Pelias seemed useful to me, so I tried to set it up for the entire planet. I installed it on a virtual machine with 38 CPUs (118 MHz), 150 GB of RAM, and 900 GB of disk space. The installation took about 3 days, after which I was seeing the following messages in the terminal:

It looks like the installation also produced about 338 GB of data.
Afterwards, I made the following API call:

`:4000/v1/autocomplete?text=Singapore`

Unfortunately, the result I received was as follows:
```json
{
  "geocoding": {
    "version": "0.2",
    "attribution": "http://myurl:4000/attribution",
    "query": {
      "text": "Singapore",
      "parser": "pelias",
      "parsed_text": { "subject": "Singapore", "locality": "Singapore" },
      "size": 10,
      "layers": ["venue", "street", "country", "macroregion", "region", "county", "localadmin", "locality", "borough", "neighbourhood", "continent", "empire", "dependency", "macrocounty", "macrohood", "microhood", "disputed", "postalcode", "ocean", "marinearea"],
      "private": false,
      "lang": { "name": "Turkish", "iso6391": "tr", "iso6393": "tur", "via": "header", "defaulted": false },
      "querySize": 20
    },
    "warnings": ["performance optimization: excluding 'address' layer"],
    "engine": { "name": "Pelias", "author": "Mapzen", "version": "1.0" },
    "timestamp": 1726155507342
  },
  "type": "FeatureCollection",
  "features": []
}
```
So, the service is running, but it seems to return empty results. What could be causing this issue, and how can I resolve it? Thank you in advance for your support.
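One way to tell whether this is an empty index or a problem in the API layer is to query Elasticsearch directly, bypassing the Pelias API (a sketch assuming the default `localhost:9200` port mapping; `name.default` is the field Pelias uses for primary names):

```bash
# Zero hits here means the import never wrote documents for Singapore;
# if hits come back, the problem is in the API layer instead.
curl -s 'http://localhost:9200/pelias/_search?q=name.default:Singapore&size=1&pretty'
```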