BezVezeE closed this issue 6 years ago
I think I got it working fine, but these errors still bother me:
Warning Unhealthy 10m (x2 over 10m) kubelet, gke-cluster-1-test-default-pool-626ce927-1jjt Liveness probe failed: dial tcp 10.8.0.6:9300: getsockopt: connection refused
Warning Unhealthy 10m kubelet, gke-cluster-1-test-default-pool-626ce927-1jjt Readiness probe failed: Get http://10.8.0.6:9200/_cluster/health: dial tcp 10.8.0.6:9200: getsockopt: connection refused
Any reason why this happens, and what is the problem?
Can you please explain how you resolved the issue? I am facing the same one!
This is a networking issue! The kubelet cannot reach 10.8.0.6 on ports 9200 or 9300. It may be down to firewall rules or a NetworkPolicy you have in place, or you may have changed the ports Elasticsearch binds to.
Hi, same issue here. ES clients keep restarting with "Liveness probe failed: dial tcp 10.1.0.21:9300: getsockopt: connection refused". I tried with Minikube and with Kubernetes in Docker for Mac Edge. Any help would be appreciated. Thank you.
It may be related to how those solutions do networking; you may need to change the network device Elasticsearch binds to. See the troubleshooting section for pointers.
That was it, sorry I missed it! I just changed NETWORK_HOST to "eth0:ipv4" and it's working perfectly! Thanks a lot!
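For anyone else hitting this, the fix above amounts to setting the NETWORK_HOST environment variable on the Elasticsearch container so it binds to the pod's eth0 IPv4 address. A sketch of the relevant fragment of the container spec (surrounding fields follow the manifests used in this repo):

```yaml
# Inside the Deployment's container spec for the Elasticsearch container
env:
- name: NETWORK_HOST
  value: "eth0:ipv4"   # bind Elasticsearch to the pod's eth0 IPv4 address
```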
Great @mfamador! 🎉
Hi @mfamador, I have the exact same problem, but mine occurs when deploying es-master. Do you know how I can proceed? @pires, do you have an idea? Is this related to the NETWORK_HOST issue as well?
My environment is as follows: OS: Ubuntu; Kubernetes: version 1.7.12.
root@node1:~/ kubectl --version
Kubernetes v1.7.12+coreos.0
root@node1:~/ kubectl get pods
NAME READY STATUS RESTARTS AGE
es-master-2519959699-0l7v4 1/1 Running 0 32m
es-master-2519959699-j95l7 1/1 Running 1 32m
es-master-2519959699-zxb0h 1/1 Running 0 32m
heketi-deployment-309687121-7qn9l 1/1 Running 0 5h
root@node1:~/ kubectl describe pods es-master-2519959699-0l7v4
Name: es-master-2519959699-0l7v4
Namespace: default
Node: node5/192.168.0.115
Start Time: Fri, 06 Apr 2018 16:17:43 +0800
Labels: component=elasticsearch
pod-template-hash=2519959699
role=master
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"es-master-2519959699","uid":"fad9d18a-3972-11e8-b36c-080027cd863...
Status: Running
IP: 10.233.108.2
Created By: ReplicaSet/es-master-2519959699
Controlled By: ReplicaSet/es-master-2519959699
Init Containers:
init-sysctl:
Container ID: docker://6a7a62487575327f313c85442d4c3a49b2eeb29e6da8140cce127e3245b9c74a
Image: busybox:1.27.2
Image ID: docker-pullable://busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
Port: <none>
Command:
sysctl
-w
vm.max_map_count=262144
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 06 Apr 2018 16:17:44 +0800
Finished: Fri, 06 Apr 2018 16:17:44 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-q0hd5 (ro)
Containers:
es-master:
Container ID: docker://0eca6f1f7d9e5f2099cda571dca850326268b9a0ef9e5a433cdb0403538f1304
Image: quay.io/pires/docker-elasticsearch-kubernetes:6.2.2_1
Image ID: docker-pullable://quay.io/pires/docker-elasticsearch-kubernetes@sha256:180f9d8779ed7d3724f52831b6071e338b0f276e8fe8f146dd2e8c7f5c8975dd
Port: 9300/TCP
State: Running
Started: Fri, 06 Apr 2018 16:17:45 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 1
Requests:
cpu: 1
Liveness: tcp-socket :transport delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
NAMESPACE: default (v1:metadata.namespace)
NODE_NAME: es-master-2519959699-0l7v4 (v1:metadata.name)
CLUSTER_NAME: myesdb
NUMBER_OF_MASTERS: 2
NODE_MASTER: true
NODE_INGEST: false
NODE_DATA: false
HTTP_ENABLE: false
ES_JAVA_OPTS: -Xms256m -Xmx256m
PROCESSORS: 1 (limits.cpu)
Mounts:
/data from storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-q0hd5 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
storage:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-q0hd5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-q0hd5
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
32m 32m 1 default-scheduler Normal Scheduled Successfully assigned es-master-2519959699-0l7v4 to node5
32m 32m 1 kubelet, node5 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "storage"
32m 32m 1 kubelet, node5 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-q0hd5"
32m 32m 1 kubelet, node5 spec.initContainers{init-sysctl} Normal Pulled Container image "busybox:1.27.2" already present on machine
32m 32m 1 kubelet, node5 spec.initContainers{init-sysctl} Normal Created Created container
32m 32m 1 kubelet, node5 spec.initContainers{init-sysctl} Normal Started Started container
32m 32m 1 kubelet, node5 spec.containers{es-master} Normal Pulled Container image "quay.io/pires/docker-elasticsearch-kubernetes:6.2.2_1" already present on machine
32m 32m 1 kubelet, node5 spec.containers{es-master} Normal Created Created container
32m 32m 1 kubelet, node5 spec.containers{es-master} Normal Started Started container
32m 32m 2 kubelet, node5 spec.containers{es-master} Warning Unhealthy Liveness probe failed: dial tcp 10.233.108.2:9300: getsockopt: connection refused
Thanks a lot!
@syafiqFiqq @BezVezeE I got the same error; setting an environment variable had no effect. Did you resolve it?
From this issue, https://github.com/pires/kubernetes-elasticsearch-cluster/issues/175, I got the solution. The key is "initialDelaySeconds: 30".
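In other words, the probes fire before Elasticsearch has finished starting, so the first dials are refused and the pod gets restarted. Adding an initial delay gives it time to come up. A sketch of the probe fragment, assuming the port names (`transport`, `http`) and endpoints shown in the `kubectl describe` output above:

```yaml
# Inside the Elasticsearch container spec
livenessProbe:
  tcpSocket:
    port: transport          # 9300
  initialDelaySeconds: 30    # wait before the first liveness check
readinessProbe:
  httpGet:
    path: /_cluster/health
    port: http               # 9200
  initialDelaySeconds: 30    # wait before the first readiness check
```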
Hi there,
I have an issue trying to set up my Kubernetes ES cluster. When I deploy the pods and services everything goes well, but when I try to curl the client pod it gives me a connection error.
I'm running this on Google Container Engine (Kubernetes Engine) on Kubernetes version 1.8.4; I also tried version 1.7.4 and had the same problem.
I tried different container versions, both 6.1.1 and 5.6.4, but still the same problem.