Closed. Concurser closed this issue 6 years ago.
I did exclude the coordinator from the worker list:
{
  "username": "qg-presto01",
  "coordinator": "172.17.233.164",
  "workers": ["172.17.233.165", "172.17.233.166"],
  "java8_home": "/usr/java/jdk1.8.0_162/"
}
But still getting the same error:
Fatal error: [172.17.233.164] discovery.uri should not be localhost in a multi-node cluster, but found http://localhost:8080. You may have encountered this error by choosing a coordinator that is localhost and a worker that is not. The default discovery-uri is http://<coordinator>:8080
Does this mean that presto-admin must be started from outside the cluster? I am running it from the coordinator.
I do not think it should matter.
What version are you using?
What do you have in config.properties for the coordinator and the workers?
Here are some docs where you can find these files: https://prestodb.io/presto-admin/docs/current/installation/presto-configuration.html#
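For reference, a minimal multi-node setup looks roughly like the following (the memory values are illustrative placeholders, and I am assuming 172.17.233.164 from your config.json is the coordinator):

coordinator config.properties:
coordinator=true
node-scheduler.include-coordinator=false
http-server.http.port=8080
query.max-memory=5GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=http://172.17.233.164:8080

worker config.properties:
coordinator=false
http-server.http.port=8080
query.max-memory=5GB
query.max-memory-per-node=1GB
discovery.uri=http://172.17.233.164:8080

The important part is that discovery.uri on every node points at the coordinator's address rather than localhost.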
Hi @kokosing,
Many thanks for the reply.
a) The packages being installed are prestoadmin-2.2-online.tar.gz and presto-server-rpm-0.167-t.0.2.x86_64.rpm (a rough sketch of the install flow follows below, after the config).
b) My config.json file follows:
{
  "username": "qg-presto01",
  "coordinator": "172.17.233.164",
  "workers": ["172.17.233.165", "172.17.233.166"],
  "java8_home": "/usr/java/jdk1.8.0_162/"
}
I also had the coordinator listed as a worker, but removing it to see if that would help did not fix the error.
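For completeness, the install flow was roughly the standard presto-admin sequence; this is a sketch only, and the RPM path below is a placeholder rather than the exact path on my machine:

./presto-admin server install /path/to/presto-server-rpm-0.167-t.0.2.x86_64.rpm
./presto-admin configuration deploy
./presto-admin server start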
I am sorry, but I did not ask for the presto-admin config.json, I asked for the Presto config.properties.
Dear @kokosing, we found the issue. The config.properties files under the /worker and /coordinator node directories had the discovery.uri field set to http://localhost:8080, which was causing the error.
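In case anyone else hits the same message, a minimal sketch of the change we applied (assuming presto-admin keeps the per-node configs under ~/.prestoadmin/coordinator/ and ~/.prestoadmin/workers/, and that 172.17.233.164 is the coordinator): in both config.properties files set

discovery.uri=http://172.17.233.164:8080

and then push the change and restart, for example:

./presto-admin configuration deploy
./presto-admin server restart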
Hello,
I have the following config.json setup:
But I get this error:
Fatal error: [172.17.233.164] discovery.uri should not be localhost in a multi-node cluster, but found http://localhost:8080. You may have encountered this error by choosing a coordinator that is localhost and a worker that is not. The default discovery-uri is http://<coordinator>:8080
After that I get some error messages from the workers:
Fatal error: [172.17.233.xxx] sudo() received nonzero return code 1 while executing!
Why is that? Can't the coordinator share the worker status in a multi-node cluster?