I'm hitting the same issue. I used the YAML from the git repo, not Helm. Kubernetes: 1.14.5.
kubectl describe pod zookeeper-operator-8449df9744-lkx9j

Events:
  Type     Reason     Age                         From                 Message
  ----     ------     ----                        ----                 -------
  Normal   Scheduled  2d11h                       default-scheduler    Successfully assigned default/zookeeper-operator-8449df9744-lkx9j to k8stian-n3
  Normal   Pulled     2d11h                       kubelet, k8stian-n3  Container image "docker.io/istio/proxy_init:1.2.2" already present on machine
  Normal   Created    2d11h                       kubelet, k8stian-n3  Created container istio-init
  Normal   Started    2d11h                       kubelet, k8stian-n3  Started container istio-init
  Normal   Pulled     2d11h                       kubelet, k8stian-n3  Container image "docker.io/istio/proxyv2:1.2.2" already present on machine
  Normal   Started    2d11h                       kubelet, k8stian-n3  Started container istio-proxy
  Normal   Created    2d11h                       kubelet, k8stian-n3  Created container istio-proxy
  Normal   Pulling    2d11h (x2 over 2d11h)       kubelet, k8stian-n3  Pulling image "pravega/zookeeper-operator:latest"
  Normal   Pulled     2d11h (x2 over 2d11h)       kubelet, k8stian-n3  Successfully pulled image "pravega/zookeeper-operator:latest"
  Normal   Created    2d11h (x2 over 2d11h)       kubelet, k8stian-n3  Created container zookeeper-operator
  Normal   Started    2d11h (x2 over 2d11h)       kubelet, k8stian-n3  Started container zookeeper-operator
  Warning  Unhealthy  2m18s (x106771 over 2d11h)  kubelet, k8stian-n3  Readiness probe failed: HTTP probe failed with statuscode: 503
[root@k8stian-m2:/usr/local/src/deploy/zookeeper-operator/deploy/crds]# kubectl logs -f zookeeper-operator-8449df9744-lkx9j
Error from server (BadRequest): a container name must be specified for pod zookeeper-operator-8449df9744-lkx9j, choose one of: [zookeeper-operator istio-proxy] or one of the init containers: [istio-init]

[root@k8stian-m2:/usr/local/src/deploy/zookeeper-operator/deploy/crds]# kubectl logs -f zookeeper-operator-8449df9744-lkx9j -c zookeeper-operator
{"level":"info","ts":1575336376.1156788,"logger":"cmd","msg":"zookeeper-operator Version: 0.2.3-17"}
{"level":"info","ts":1575336376.1157143,"logger":"cmd","msg":"Git SHA: cea93ee"}
{"level":"info","ts":1575336376.1157634,"logger":"cmd","msg":"Go Version: go1.12.10"}
{"level":"info","ts":1575336376.1157687,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1575336376.1157732,"logger":"cmd","msg":"operator-sdk Version: v0.3.0"}
{"level":"info","ts":1575336376.1163554,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1575336376.294069,"logger":"leader","msg":"No pre-existing lock was found."}
{"level":"info","ts":1575336376.3139248,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1575336376.3809543,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1575336376.3813946,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"zookeepercluster-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1575336376.3815124,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"zookeepercluster-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1575336376.3815713,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"zookeepercluster-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1575336376.381645,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"zookeepercluster-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1575336376.3816953,"logger":"cmd","msg":"Starting the Cmd."}
{"level":"info","ts":1575336376.5820937,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"zookeepercluster-controller"}
{"level":"info","ts":1575336376.6822746,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"zookeepercluster-controller","worker count":1}
@seecsea @sylvainOL Thanks for reporting this. I'll take a look ASAP.
Any updates here?
To fix this, we perhaps need to set quorumListenOnAllIPs to true in ZooKeeper: https://github.com/istio/istio.io/blob/master/content/en/faq/applications/zookeeper.md
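For reference, a minimal sketch of what that could look like with this operator, assuming a ZookeeperCluster CRD version that exposes the option under spec.config (on versions that don't, the equivalent is getting quorumListenOnAllIPs=true into zoo.cfg some other way):

apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: zookeeper
spec:
  replicas: 3
  config:
    # Assumption: this field may not exist on older operator versions;
    # it corresponds to quorumListenOnAllIPs=true in zoo.cfg.
    quorumListenOnAllIPs: true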
Yes, it's still not working for me either.
When you turn on quorumListenOnAllIPs, you are going to start having this problem, but if you use my health checks it will work around the problem.
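For context, exec probes run inside the container, so unlike the kubelet's HTTP and TCP probes they are not intercepted by the istio-proxy sidecar. A generic sketch of such a probe (not the exact health checks mentioned above) could look like this, assuming nc is available in the image and the ruok four-letter command is enabled:

readinessProbe:
  exec:
    command:
    - sh
    - -c
    # "ruok" replies "imok" when the ZooKeeper server is running;
    # on ZooKeeper 3.5+ it must be allowed via 4lw.commands.whitelist.
    - 'echo ruok | nc 127.0.0.1 2181 | grep imok'
  initialDelaySeconds: 10
  timeoutSeconds: 5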
Hello,
I've deployed a Kubernetes cluster with Istio. When I try to deploy a 3-node ZooKeeper cluster, the second node can't start because its connections are immediately closed.
As Banzai Cloud has published a blog post about running Kafka (+ ZooKeeper?) on Istio (https://banzaicloud.com/blog/kafka-on-istio-performance/), and their kafka-operator (https://github.com/banzaicloud/kafka-operator) proposes using your operator, I assumed it was possible, but I don't see how :-/
I've tried disabling mTLS, but that doesn't seem to be the issue.
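For reference, one way mTLS can be disabled for a single service on Istio 1.2 is a DestinationRule like the following sketch (the host name here is an assumption, adjust it to the actual ZooKeeper service):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: zookeeper-no-mtls
spec:
  # Assumed host name; replace with the real ZooKeeper (headless) service.
  host: zookeeper-headless.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE  # client side: send plaintext to this host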
Here's how I deployed:

Here are the logs: