
zookeeper pods not created after installing zookeeper with helm chart #10051

Closed: nleeuskadi closed this issue 2 years ago

nleeuskadi commented 2 years ago

Name and Version

bitnami/zookeeper 9.1.1

What steps will reproduce the bug?

  1. Environment: a kind Kubernetes cluster running on Ubuntu 20.04.4 LTS under WSL2 (Windows 11)
  2. Run the following command to install a zookeeper release: helm install zook bitnami/zookeeper --set persistence.enabled=false
  3. The StatefulSet zook-zookeeper is stuck at 0 pods created:
    $ kubectl get -o wide statefulsets -n nifi
    NAME             READY   AGE   CONTAINERS   IMAGES
    zook-zookeeper   0/1     11m   zookeeper    docker.io/bitnami/zookeeper:3.8.0-debian-10-r37

Are you using any custom parameters or values?

--set persistence.enabled=false
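For reference, the same override can live in a values file instead of a --set flag; a minimal sketch (the file name is illustrative):

    # Write the override to a file and pass it to helm with -f
    cat > zk-values.yaml <<'EOF'
    persistence:
      enabled: false
    EOF
    helm install zook bitnami/zookeeper -f zk-values.yaml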

What is the expected behavior?

A zookeeper pod should be created.

What do you see instead?

No zookeeper pods are created.

carrodher commented 2 years ago

Are you able to see any error when describing the statefulset?

kubectl describe sts zook-zookeeper
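
If describe shows nothing, events and the release's pods can still surface scheduler or controller errors; something along these lines, using the release's namespace:

    # Recent events in the namespace, newest last
    kubectl get events -n nifi --sort-by=.lastTimestamp
    # Pods belonging to the release, matching the StatefulSet's selector
    kubectl get pods -n nifi -l app.kubernetes.io/instance=zook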
nleeuskadi commented 2 years ago

Thank you @carrodher for your help. Below is the output of the command you suggested; I can't see any errors:

Name:               zook-zookeeper
Namespace:          nifi
CreationTimestamp:  Thu, 05 May 2022 21:52:48 +0200
Selector:           app.kubernetes.io/component=zookeeper,app.kubernetes.io/instance=zook,app.kubernetes.io/name=zookeeper
Labels:             app.kubernetes.io/component=zookeeper
                    app.kubernetes.io/instance=zook
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=zookeeper
                    helm.sh/chart=zookeeper-9.1.1
                    role=zookeeper
Annotations:        meta.helm.sh/release-name: zook
                    meta.helm.sh/release-namespace: nifi
Replicas:           1 desired | 0 total
Update Strategy:    RollingUpdate
  Partition:        0
Pods Status:        0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/component=zookeeper
                    app.kubernetes.io/instance=zook
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=zookeeper
                    helm.sh/chart=zookeeper-9.1.1
  Service Account:  default
  Containers:
   zookeeper:
    Image:       docker.io/bitnami/zookeeper:3.8.0-debian-10-r37
    Ports:       2181/TCP, 2888/TCP, 3888/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      /scripts/setup.sh
    Requests:
      cpu:      250m
      memory:   256Mi
    Liveness:   exec [/bin/bash -c echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/bash -c echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:               false
      ZOO_DATA_LOG_DIR:
      ZOO_PORT_NUMBER:             2181
      ZOO_TICK_TIME:               2000
      ZOO_INIT_LIMIT:              10
      ZOO_SYNC_LIMIT:              5
      ZOO_PRE_ALLOC_SIZE:          65536
      ZOO_SNAPCOUNT:               100000
      ZOO_MAX_CLIENT_CNXNS:        60
      ZOO_4LW_COMMANDS_WHITELIST:  srvr, mntr, ruok
      ZOO_LISTEN_ALLIPS_ENABLED:   no
      ZOO_AUTOPURGE_INTERVAL:      0
      ZOO_AUTOPURGE_RETAIN_COUNT:  3
      ZOO_MAX_SESSION_TIMEOUT:     40000
      ZOO_SERVERS:                 zook-zookeeper-0.zook-zookeeper-headless.nifi.svc.cluster.local:2888:3888::1
      ZOO_ENABLE_AUTH:             no
      ZOO_HEAP_SIZE:               1024
      ZOO_LOG_LEVEL:               ERROR
      ALLOW_ANONYMOUS_LOGIN:       yes
      POD_NAME:                     (v1:metadata.name)
    Mounts:
      /bitnami/zookeeper from data (rw)
      /scripts/setup.sh from scripts (rw,path="setup.sh")
  Volumes:
   scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      zook-zookeeper-scripts
    Optional:  false
   data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
Volume Claims:  <none>
Events:         <none>
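
Note that the StatefulSet reports "Replicas: 1 desired | 0 total" with no events at all, which suggests the StatefulSet controller never acted on the object. One generic check in a kind cluster is the controller manager itself; a sketch, assuming kind's default cluster name:

    # The controller manager runs as a static pod on the control-plane node;
    # the pod name below assumes a cluster created with a plain "kind create cluster"
    kubectl -n kube-system logs kube-controller-manager-kind-control-plane | grep -i statefulset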
carrodher commented 2 years ago

Unfortunately I am not able to reproduce the issue:

$ helm install zook bitnami/zookeeper --set persistence.enabled=false
NAME: zook
LAST DEPLOYED: Fri May  6 09:06:56 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 9.1.1
APP VERSION: 3.8.0

** Please be patient while the chart is being deployed **

ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:

    zook-zookeeper.default.svc.cluster.local

To connect to your ZooKeeper server run the following commands:

    export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zook,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
    kubectl exec -it $POD_NAME -- zkCli.sh

To connect to your ZooKeeper server from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/zook-zookeeper 2181: &
    zkCli.sh 127.0.0.1:2181

$ helm ls
NAME    NAMESPACE   REVISION    UPDATED                                 STATUS      CHART                           APP VERSION
zook    default     1           2022-05-06 09:06:56.930992778 +0000 UTC deployed    zookeeper-9.1.1                 3.8.0

$ kubectl get sts
NAME             READY   AGE
zook-zookeeper   1/1     3m48s

$ kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
zook-zookeeper-0   1/1     Running   0          3m42s

Everything is up and running as expected. I am not sure whether it is something related to your environment or some other configuration.
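
A running pod can also be checked the same way the chart's probes do, since ruok is in ZOO_4LW_COMMANDS_WHITELIST:

    kubectl exec zook-zookeeper-0 -- bash -c 'echo "ruok" | timeout 2 nc -w 2 localhost 2181'
    # a healthy server answers: imok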

nleeuskadi commented 2 years ago

OK, thank you @carrodher for your time. I will investigate further and get back to you.

nleeuskadi commented 2 years ago

Hi, I finally deleted and recreated my kind cluster completely, and now it works like a charm. I did not find out the reason for my problem, but since it disappeared, it's OK :) Again, thank you very much for your support @carrodher
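
For anyone hitting the same dead end: recreating the kind cluster from scratch looks roughly like this, assuming the default cluster name:

    kind delete cluster    # deletes the cluster named "kind" by default
    kind create cluster
    helm install zook bitnami/zookeeper --set persistence.enabled=false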