
mongodb (v 12.1.7) won't start on a fresh minikube instance #10315

Closed: Anticom closed this issue 2 years ago

Anticom commented 2 years ago

Name and Version

bitnami/mongodb 12.1.7

What steps will reproduce the bug?

minikube config set cpus 4
minikube config set memory 4933
minikube start
kubectl create ns mongodb
helm upgrade --install -n mongodb --version 12.0.0 mongodb bitnami/mongodb
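
To see the failure, it is enough to watch the pod status after installing; the pod keeps cycling without ever becoming ready:

kubectl get pods -n mongodb -w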

Are you using any custom parameters or values?

No, using default parameters

What is the expected behavior?

MongoDB should start with standalone architecture.

What do you see instead?

The mongodb pod starts but never becomes ready. Looking at the logs, it never gets past the last line of this snippet. After some time the pod is restarted (presumably because the probes fail), and then the same thing happens over and over again:

mongodb 08:15:06.66 
mongodb 08:15:06.69 Welcome to the Bitnami mongodb container
mongodb 08:15:06.71 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb 08:15:06.73 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb 08:15:06.75 
mongodb 08:15:06.77 INFO  ==> ** Starting MongoDB setup **
mongodb 08:15:06.95 INFO  ==> Validating settings in MONGODB_* env vars...
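
To confirm that it is the probes killing the container rather than the process crashing on its own, the restart reason and the previous container's logs can be checked with standard kubectl commands, for example:

# show the probe configuration and the reason for the last restart
kubectl describe pod -n mongodb mongodb-6c54979779-ltvpq
# logs of the previous (killed) container instance
kubectl logs -n mongodb mongodb-6c54979779-ltvpq --previous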

Here's the environment inside the container:

I have no name!@mongodb-6c54979779-ltvpq:/$ printenv 
_=/usr/bin/printenv
OS_ARCH=amd64
MONGODB_SERVICE_PORT_MONGODB=27017
PATH=/opt/bitnami/common/bin:/opt/bitnami/mongodb/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
OS_NAME=linux
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
ALLOW_EMPTY_PASSWORD=no
MONGODB_DISABLE_JAVASCRIPT=no
MONGODB_SYSTEM_LOG_VERBOSITY=0
APP_VERSION=5.0.8
MONGODB_ENABLE_DIRECTORY_PER_DB=no
MONGODB_ROOT_PASSWORD=uc1U2xVJsT
MONGODB_PORT=tcp://10.96.144.125:27017
MONGODB_ENABLE_JOURNAL=yes
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
BITNAMI_APP_NAME=mongodb
KUBERNETES_PORT_443_TCP_PROTO=tcp
SHLVL=1
TERM=xterm
MONGODB_PORT_27017_TCP_PORT=27017
MONGODB_PORT_NUMBER=27017
BITNAMI_DEBUG=false
MONGODB_ENABLE_IPV6=no
MONGODB_SERVICE_HOST=10.96.144.125
MONGODB_PORT_27017_TCP_ADDR=10.96.144.125
MONGODB_PORT_27017_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
MONGODB_ROOT_USER=root
HOME=/
MONGODB_DISABLE_SYSTEM_LOG=no
OS_FLAVOUR=debian-10
PWD=/
MONGODB_SERVICE_PORT=27017
MONGODB_PORT_27017_TCP=tcp://10.96.144.125:27017
HOSTNAME=mongodb-6c54979779-ltvpq
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443

Additional information

$ minikube version
minikube version: v1.25.2
commit: 362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7
$ helm version --short
v3.9.0+g7ceeda6
$ kubectl version --output yaml
clientVersion:
  buildDate: "2022-05-03T13:36:49Z"
  compiler: gc
  gitCommit: 4ce5a8954017644c5420bae81d72b09b735c21f0
  gitTreeState: clean
  gitVersion: v1.24.0
  goVersion: go1.18.1
  major: "1"
  minor: "24"
  platform: darwin/arm64
kustomizeVersion: v4.5.4
serverVersion:
  buildDate: "2022-01-25T21:19:12Z"
  compiler: gc
  gitCommit: 816c97ab8cff8a1c72eccca1026f7820e93e0d25
  gitTreeState: clean
  gitVersion: v1.23.3
  goVersion: go1.17.6
  major: "1"
  minor: "23"
  platform: linux/arm64
$ kubectl get events -n mongodb
LAST SEEN   TYPE      REASON                  OBJECT                          MESSAGE
12m         Normal    Scheduled               pod/mongodb-6c54979779-ltvpq    Successfully assigned mongo/mongodb-6c54979779-ltvpq to minikube
12m         Normal    Pulling                 pod/mongodb-6c54979779-ltvpq    Pulling image "docker.io/bitnami/mongodb:5.0.8-debian-10-r9"
12m         Normal    Pulled                  pod/mongodb-6c54979779-ltvpq    Successfully pulled image "docker.io/bitnami/mongodb:5.0.8-debian-10-r9" in 11.409829339s
12m         Normal    Created                 pod/mongodb-6c54979779-ltvpq    Created container mongodb
12m         Normal    Started                 pod/mongodb-6c54979779-ltvpq    Started container mongodb
2m32s       Warning   Unhealthy               pod/mongodb-6c54979779-ltvpq    Readiness probe failed: command "/bitnami/scripts/readiness-probe.sh" timed out
10m         Warning   Unhealthy               pod/mongodb-6c54979779-ltvpq    Liveness probe failed: command "/bitnami/scripts/ping-mongodb.sh" timed out
12m         Normal    SuccessfulCreate        replicaset/mongodb-6c54979779   Created pod: mongodb-6c54979779-ltvpq
21m         Normal    Scheduled               pod/mongodb-74f67dbbf8-f9fvq    Successfully assigned mongo/mongodb-74f67dbbf8-f9fvq to minikube
21m         Normal    Pulled                  pod/mongodb-74f67dbbf8-f9fvq    Container image "docker.io/bitnami/mongodb:5.0.8-debian-10-r20" already present on machine
21m         Normal    Created                 pod/mongodb-74f67dbbf8-f9fvq    Created container mongodb
21m         Normal    Started                 pod/mongodb-74f67dbbf8-f9fvq    Started container mongodb
19m         Warning   Unhealthy               pod/mongodb-74f67dbbf8-f9fvq    Readiness probe failed: command "/bitnami/scripts/readiness-probe.sh" timed out
19m         Warning   Unhealthy               pod/mongodb-74f67dbbf8-f9fvq    Liveness probe failed: command "/bitnami/scripts/ping-mongodb.sh" timed out
19m         Normal    Killing                 pod/mongodb-74f67dbbf8-f9fvq    Container mongodb failed liveness probe, will be restarted
16m         Normal    Killing                 pod/mongodb-74f67dbbf8-f9fvq    Stopping container mongodb
34m         Normal    Scheduled               pod/mongodb-74f67dbbf8-hmrbc    Successfully assigned mongo/mongodb-74f67dbbf8-hmrbc to minikube
34m         Normal    Pulled                  pod/mongodb-74f67dbbf8-hmrbc    Container image "docker.io/bitnami/mongodb:5.0.8-debian-10-r20" already present on machine
34m         Normal    Created                 pod/mongodb-74f67dbbf8-hmrbc    Created container mongodb
34m         Normal    Started                 pod/mongodb-74f67dbbf8-hmrbc    Started container mongodb
29m         Warning   Unhealthy               pod/mongodb-74f67dbbf8-hmrbc    Readiness probe failed: command "/bitnami/scripts/readiness-probe.sh" timed out
32m         Warning   Unhealthy               pod/mongodb-74f67dbbf8-hmrbc    Liveness probe failed: command "/bitnami/scripts/ping-mongodb.sh" timed out
33m         Warning   Unhealthy               pod/mongodb-74f67dbbf8-hmrbc    Liveness probe failed: Current Mongosh Log ID:   6285f6d8c991a1d942ad65b6...
32m         Normal    Killing                 pod/mongodb-74f67dbbf8-hmrbc    Container mongodb failed liveness probe, will be restarted
25m         Warning   FailedScheduling        pod/mongodb-74f67dbbf8-tzw6f    0/1 nodes are available: 1 persistentvolumeclaim "mongodb" is being deleted.
22m         Warning   FailedScheduling        pod/mongodb-74f67dbbf8-tzw6f    0/1 nodes are available: 1 persistentvolumeclaim "mongodb" not found.
22m         Warning   FailedScheduling        pod/mongodb-74f67dbbf8-tzw6f    skip schedule deleting pod: mongo/mongodb-74f67dbbf8-tzw6f
34m         Normal    SuccessfulCreate        replicaset/mongodb-74f67dbbf8   Created pod: mongodb-74f67dbbf8-hmrbc
25m         Normal    SuccessfulCreate        replicaset/mongodb-74f67dbbf8   Created pod: mongodb-74f67dbbf8-tzw6f
21m         Normal    SuccessfulCreate        replicaset/mongodb-74f67dbbf8   Created pod: mongodb-74f67dbbf8-f9fvq
34m         Normal    ExternalProvisioning    persistentvolumeclaim/mongodb   waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
34m         Normal    Provisioning            persistentvolumeclaim/mongodb   External provisioner is provisioning volume for claim "mongo/mongodb"
34m         Normal    ProvisioningSucceeded   persistentvolumeclaim/mongodb   Successfully provisioned volume pvc-8ad983dd-27e3-4d57-b13b-77c856d8c5ea
34m         Normal    ScalingReplicaSet       deployment/mongodb              Scaled up replica set mongodb-74f67dbbf8 to 1
25m         Normal    ScalingReplicaSet       deployment/mongodb              Scaled up replica set mongodb-74f67dbbf8 to 1
21m         Normal    Provisioning            persistentvolumeclaim/mongodb   External provisioner is provisioning volume for claim "mongo/mongodb"
21m         Normal    ExternalProvisioning    persistentvolumeclaim/mongodb   waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
21m         Normal    ProvisioningSucceeded   persistentvolumeclaim/mongodb   Successfully provisioned volume pvc-91124091-2aab-4cd8-b6e3-2cbf67461ed3
21m         Normal    ScalingReplicaSet       deployment/mongodb              Scaled up replica set mongodb-74f67dbbf8 to 1
12m         Normal    Provisioning            persistentvolumeclaim/mongodb   External provisioner is provisioning volume for claim "mongo/mongodb"
12m         Normal    ExternalProvisioning    persistentvolumeclaim/mongodb   waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
12m         Normal    ProvisioningSucceeded   persistentvolumeclaim/mongodb   Successfully provisioned volume pvc-b88b96b8-b9ac-4540-af30-e5e0e842f66c
12m         Normal    ScalingReplicaSet       deployment/mongodb              Scaled up replica set mongodb-6c54979779 to 1

Please note that some events may appear duplicated, since I repeatedly reinstalled the chart to see whether I could fix the issue myself by modifying some settings.
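
Since the events show the probe scripts timing out, the probe timing is one of the settings worth tweaking. A minimal sketch, assuming the chart exposes the standard Bitnami livenessProbe/readinessProbe parameters (the chart README lists the exact names):

helm upgrade --install -n mongodb mongodb bitnami/mongodb \
  --set livenessProbe.initialDelaySeconds=60 \
  --set livenessProbe.timeoutSeconds=20 \
  --set readinessProbe.initialDelaySeconds=30 \
  --set readinessProbe.timeoutSeconds=20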

Anticom commented 2 years ago

FYI: I'm experiencing the same issue with bitnami/mongodb-sharded (chart version 5.0.8)

jmConan commented 2 years ago

I might be wrong, but I read that ARM64 is not supported at all.

Anticom commented 2 years ago

@jmConan I would have thought the container would refuse to start if the platform were the issue (see docker/for-mac#6137 for example, which is a common problem for many amd64 images). Either way, IMHO it would be nice to get a proper error message out of the Bitnami container rather than having it simply stop logging until it is killed by k8s for not becoming ready or alive within the configured limits.
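
For reference, one quick way to check whether the node and the image architectures actually match (standard kubectl and docker commands):

# architecture reported by the Kubernetes node
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'
# platforms published in the image manifest (requires the docker CLI)
docker manifest inspect docker.io/bitnami/mongodb:5.0.8-debian-10-r9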

carrodher commented 2 years ago

Hi, we have created the issue https://github.com/bitnami/charts/issues/7305, which is pinned in the Bitnami Helm Charts repository, so that we can funnel all conversation about ARM64 support into a single place. We will close the rest of the existing issues to avoid duplication; please follow the above-mentioned issue for any news (when available) on this topic.