Closed: DandyDeveloper closed this issue 5 years ago.
Hey @DandyDeveloper!
This is a weird one. If the headless service weren't resolvable, the cluster shouldn't have been able to form at all, so this is an odd place for it to start failing.
For what it's worth, I can't reproduce this on GKE (it would also be caught by our automated integration tests). The cluster is also able to recover from deleting all of the pods at once, which I have tested a lot.
Some questions from my side:
kubectl get pod elasticsearch-master-0 -o yaml
helm get elasticsearch-master
curl localhost:9200/_cluster/health?pretty=true from inside one of the pods.
ping elasticsearch-master-headless run from within one of the running containers.
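For reference, the two in-pod checks can be run with kubectl exec, for example (pod name taken from the request above; if ping is not available in the image, nslookup or getent hosts are common substitutes):

# cluster health from inside the pod
kubectl exec -it elasticsearch-master-0 -- curl -s 'localhost:9200/_cluster/health?pretty=true'
# DNS resolution of the headless service from inside the pod
kubectl exec -it elasticsearch-master-0 -- ping -c 3 elasticsearch-master-headless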
@Crazybus Sorry for not getting this to you sooner. I actually had a look at the specific node this was running against, and it was in fact an issue with the node itself.
Everything on the node was effectively unable to resolve DNS because the node wasn't properly provisioned to reach the kube router.
After fixing this and killing the pod, it started to behave. The data node also came up successfully. Sorry to have wasted your time with this.
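For anyone debugging something similar, a quick sanity check is whether a fresh pod can resolve the headless service at all. A minimal sketch, assuming busybox is acceptable and the pod is started in the same namespace as the Elasticsearch pods:

# throwaway pod that resolves the headless service and is removed afterwards
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup elasticsearch-master-headless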
Great to hear and thank you for following up!
Same problem here. How did you fix this?
@100cm As mentioned above, our problem was an on-prem networking issue, nothing to do with the chart itself.
I faced the same issue. In my case, firewalld was blocking DNS requests. Disabling firewalld (or permitting 53/udp and 53/tcp) fixes it.
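A minimal sketch of that firewalld fix, run on each affected node (opening port 53 rather than disabling firewalld entirely):

# allow DNS traffic through firewalld and reload the rules
firewall-cmd --permanent --add-port=53/udp
firewall-cmd --permanent --add-port=53/tcp
firewall-cmd --reload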
Chart version: latest
Kubernetes version: 1.12.7
Kubernetes provider: Bare Metal
Helm Version: 2.14
Values.yaml:
Describe the bug: After successfully deploying the 3 masters, I removed one to test recovery, but the deleted master node cannot recover.
The service is running and the other masters are running fine, but the deleted pod cannot resolve the headless service DNS (or any DNS, for that matter):
Steps to reproduce:
Expected behavior: The pod should recover successfully.
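For completeness, a minimal version of the recovery test described above (pod name taken from this report; the watch is only there to observe the pod being recreated):

# delete one master and watch it come back
kubectl delete pod elasticsearch-master-0
kubectl get pods -w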
Provide logs and/or server output (if relevant):