
[bitnami/mongodb-sharded] Sharded-data instances failing with incorrect shardsvr.service.name value #6808

Closed: emmanuelm41 closed this issue 3 years ago

emmanuelm41 commented 3 years ago

**Which chart**: mongodb-sharded v3.7.0

**Describe the bug**
When I deployed this chart in a Kubernetes cluster, the mongos and configsvr pods were initialized successfully, but the shard data (shardX-data) pods were not. They fail with the message "cannot resolve pre-prod-mongodb-service: no such host". The shard data pods look up the mongos instance through the env variable MONGODB_MONGOS_HOST, but this variable is set to common.names.fullname and not to mongodb-sharded.serviceName.
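For context, the mongos Service name is resolved through the chart's `mongodb-sharded.serviceName` helper, which is presumably defined along these lines (a sketch, not the exact chart source):

{{/* Sketch: return the custom service name when set, otherwise fall back
     to the generated fullname */}}
{{- define "mongodb-sharded.serviceName" -}}
  {{- if .Values.service.name -}}
    {{- .Values.service.name -}}
  {{- else -}}
    {{- include "common.names.fullname" . -}}
  {{- end -}}
{{- end -}}

Because the shard data statefulset sets MONGODB_MONGOS_HOST from common.names.fullname directly instead of going through this helper, the name of the mongos Service and the host the data pods try to resolve can disagree as soon as service.name is customized.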

**To Reproduce**
Steps to reproduce the behavior:

  1. Deploy the chart with the following values
    
# This file and all contents in it are OPTIONAL.

# The namespace this chart will be installed to,
# if not specified the chart will be installed to "default"
namespace: dev

# Custom helm options
helm:

# The directory of the chart in the repo. Also any valid go-getter supported
# URL can be used to specify where to download the chart from.
# If repo below is set, this value is the chart name in the repo.
chart: "mongodb-sharded"

# An https URL to a valid Helm repository to download the chart from
repo: "https://charts.bitnami.com/bitnami"

# Used if repo is set to look up the version of the chart
version: "3.7.0"

# Force recreate resources that can not be updated
force: false

# How long for Helm to wait for the release to be active. If the value
# is less than or equal to zero, we will not wait in Helm
timeoutSeconds: 600

# Custom values that will be passed as values.yaml to the installation

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
#   storageClass: myStorageClass

## Bitnami MongoDB(R) Sharded image version
## ref: https://hub.docker.com/r/bitnami/mongodb-sharded/tags/
values:

## MongoDB(R) credentials
##

## Name of a secret containing all the credentials above
## ref:
##
## existingSecret: name-of-existing-secret
##
existingSecret: pre-prod-mongodb-secret

## Mount credentials as files instead of using environment variables
##
usePasswordFile: false

## Number of MongoDB(R) Shards
## ref: https://docs.mongodb.com/manual/core/sharded-cluster-shards/
##
shards: 2

## Shard replica set properties
## ref: https://docs.mongodb.com/manual/replication/index.html
##
shardsvr:
  ## Properties for data nodes (primary and secondary)
  ##
  dataNode:
    ## Number of replicas. A value of replicas=1 is simply a primary node
    ##
    replicas: 1
    ## Node labels for pod assignment
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ## You can set dataNodeLoopId (or any other parameter) by setting the below code block under this 'nodeSelector' section:
    ## nodeSelector: { shardId: "{{ .dataNodeLoopId }}" }
    ##
    nodeSelector:
      type: "dedicated"
    ## Tolerations for pod assignment
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ## You can set dataNodeLoopId (or any other parameter) by setting the below code block under this 'nodeSelector' section:
    ## tolerations:
    ## - key: "shardId"
    ##   operator: "Equal"
    ##   value: "{{ .dataNodeLoopId }}"
    ##   effect: "NoSchedule"
    ##
    tolerations:
      - key: "db"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"

  ## Enable persistence using Persistent Volume Claims
  ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    enabled: true
    ## The path the volume will be mounted at, useful when using different
    ## MongoDB(R) images.
    ##
    mountPath: /bitnami/mongodb

    ## The subdirectory of the volume to mount to, useful in dev environments
    ## and one PV for multiple services.
    ##
    subPath: ""

    ## mongodb data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    storageClass: "local-path"
    accessModes:
      - ReadWriteOnce
    ## PersistentVolumeClaim size
    ##
    size: 4Ti
    ## Additional volume annotations
    ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
    ##
    annotations: { }

## Config Server replica set properties
## ref: https://docs.mongodb.com/manual/core/sharded-cluster-config-servers/
##
configsvr:
  ## Number of replicas. A value of replicas=1 is simply a primary node
  ##
  replicas: 1
  ## Node labels for pod assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector:
    type: "dedicated"
  ## Tolerations for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations:
    - key: "db"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  ## Enable persistence using Persistent Volume Claims
  ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    enabled: true
    ## The path the volume will be mounted at, useful when using different
    ## MongoDB(R) images.
    ##
    mountPath: /bitnami/mongodb

    ## The subdirectory of the volume to mount to, useful in dev environments
    ## and one PV for multiple services.
    ##
    subPath: ""

    ## mongodb data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    storageClass: "local-path"
    accessModes:
      - ReadWriteOnce
    ## PersistentVolumeClaim size
    ##
    size: 1Ti
    ## Additional volume annotations
    ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
    ##
    annotations: { }

## Mongos properties
## ref: https://docs.mongodb.com/manual/reference/program/mongos/#bin.mongos
##
mongos:
  ## Use StatefulSet instead of Deployment
  ##
  useStatefulSet: true
  ## Node labels for pod assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector:
    type: "dedicated"
  ## Tolerations for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations:
    - key: "db"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"

## Kubernetes service type
## ref: https://kubernetes.io/docs/concepts/services-networking/service/
##
service:
  ## Specify an explicit service name
  ##
  name: pre-prod-mongodb-service
  ## Additional service annotations (evaluate as a template)
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  annotations:
    load-balancer.hetzner.cloud/location: "nbg1"
  ## Service type
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
  ##
  type: LoadBalancer
  ## MongoDB(R) Service port and Container Port
  ##
  port: 27017
  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  nodePort: 31922

## Prometheus Exporter / Metrics
##
metrics:
  enabled: true

  ## Metrics exporter resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  # resources: {}

  ## Metrics exporter liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
  ##
  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1

  ## Prometheus Service Monitor
  ## ref: https://github.com/coreos/prometheus-operator
  ##      https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
  ##
  podMonitor:
    ## If the operator is installed in your cluster, set to true to create a PodMonitor entry
    ##
    enabled: true
    ## Specify the namespace in which the podMonitor resource will be created
    ##
    namespace: default
  2. You will see the shard data (shardX-data) pods failing.
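A quick way to confirm which host the data pods will try to resolve is to render the chart locally. This assumes the Bitnami repo has been added and the chart values above (everything from `values:` on) are saved to values.yaml; the release name pre-prod-mongodb is only illustrative:

# Inspect the mongos host env var injected into the shard data pods
$ helm template pre-prod-mongodb bitnami/mongodb-sharded --version 3.7.0 \
    -f values.yaml | grep 'MONGODB_MONGOS_HOST' -A 1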

**Expected behavior**
It should not be possible to set service.name to a value that leads the shard data pods into this error.

**Version of Helm and Kubernetes**:

- Output of `helm version`:

version.BuildInfo{Version:"v3.3.3-rancher3", GitCommit:"657df59bbba1d9e175cf5080d4885bd57d037906", GitTreeState:"clean", GoVersion:"go1.13.15"}


- Output of `kubectl version`:

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.6", GitCommit:"8a62859e515889f07e3e3be6a1080413f17cf2c3", GitTreeState:"clean", BuildDate:"2021-04-15T03:19:55Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}

carrodher commented 3 years ago

Hi, I was able to reproduce the issue and tried to fix it with the change you are proposing; although the variable is now properly set, the data pods are still not up and running. These are the changes:

@ bitnami/mongodb-sharded/templates/shard/shard-data-statefulset.yaml:121 @ spec:
                fieldRef:
                  fieldPath: metadata.name
            - name: MONGODB_MONGOS_HOST
-             value: {{ include "common.names.fullname" $ }}
+             value: {{ include "mongodb-sharded.serviceName" $ }}
            - name: MONGODB_INITIAL_PRIMARY_HOST
              value: {{ printf "%s-shard%d-data-0.%s-headless.%s.svc.%s" (include "common.names.fullname" $ ) $i (include "common.names.fullname" $ ) $.Release.Namespace $.Values.clusterDomain }}
            - name: MONGODB_REPLICA_SET_NAME

@ bitnami/mongodb-sharded/values.yaml:786 @ mongos:
  schedulerName:
  ## Use StatefulSet instead of Deployment
  ##
- useStatefulSet: false
+ useStatefulSet: true
  ## When using a statefulset, you can enable one service per replica
  ## This is useful when exposing the mongos through load balancers to make sure clients
  ## connect to the same mongos and therefore can follow their cursors
@ bitnami/mongodb-sharded/values.yaml:989 @ clusterDomain: cluster.local
service:
  ## Specify an explicit service name
  ##
- name:
+ name: my-new-service
  ## Additional service annotations (evaluate as a template)
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
@ bitnami/mongodb-sharded/values.yaml:997 @ service:
  ## Service type
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
  ##
- type: ClusterIP
+ type: LoadBalancer
  ## External traffic policy
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
  ##

And this is how it is rendered:

$ helm template mongo --set service.name=my-new-service . | grep 'MONGODB_MONGO' -C 4
zsh: correct 'template' to 'templates' [nyae]? n
            - name: MONGODB_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MONGODB_MONGOS_HOST
              value: my-new-service
            - name: MONGODB_INITIAL_PRIMARY_HOST
              value: mongo-mongodb-sharded-shard0-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local
            - name: MONGODB_REPLICA_SET_NAME
--
            - name: MONGODB_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MONGODB_MONGOS_HOST
              value: my-new-service
            - name: MONGODB_INITIAL_PRIMARY_HOST
              value: mongo-mongodb-sharded-shard1-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local
            - name: MONGODB_REPLICA_SET_NAME

But when installing the chart, although the env. var. is properly set, the issue is still there:

$ kubectl get pods
NAME                                   READY   STATUS             RESTARTS   AGE
mongo-mongodb-sharded-configsvr-0     1/1     Running            0          22m
mongo-mongodb-sharded-mongos-0        1/1     Running            0          22m
mongo-mongodb-sharded-shard0-data-0   0/1     Running            8          22m
mongo-mongodb-sharded-shard1-data-0   0/1     Running            8          22m

$ kubectl logs -f mongo-mongodb-sharded-shard1-data-0
 09:25:09.54 INFO  ==> Setting node as primary
mongodb 09:25:09.57
mongodb 09:25:09.58 Welcome to the Bitnami mongodb-sharded container
mongodb 09:25:09.58 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb-sharded
mongodb 09:25:09.58 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb-sharded/issues
mongodb 09:25:09.58
mongodb 09:25:09.58 INFO  ==> ** Starting MongoDB Sharded setup **
mongodb 09:25:09.61 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 09:25:09.63 INFO  ==> Initializing MongoDB Sharded...
mongodb 09:25:09.65 INFO  ==> Writing keyfile for replica set authentication...
mongodb 09:25:09.66 INFO  ==> Enabling authentication...
mongodb 09:25:09.67 INFO  ==> Deploying MongoDB Sharded with persisted data...
mongodb 09:25:09.68 INFO  ==> Trying to connect to MongoDB server my-service-name...
cannot resolve host "my-new-service": lookup my-new-service on 10.30.240.10:53: no such host
cannot resolve host "my-new-service": lookup my-new-service on 10.30.240.10:53: no such host
cannot resolve host "my-new-service": lookup my-new-service on 10.30.240.10:53: no such host
cannot resolve host "my-new-service": lookup my-new-service on 10.30.240.10:53: no such host

There must be something else going on; I will continue taking a look. If you find any clue on your side, we'll be happy to review any PR/contribution. Thanks!

emmanuelm41 commented 3 years ago

Thanks for your quick answer. The way I fixed it was to use the fullname as the value of shardsvr.service.name: I took the fullname the chart generates and set it as the service name. That way, everything worked.

In your example, it should have worked! Could you check whether the LoadBalancer service is up? It is not necessary to use a LoadBalancer; a ClusterIP works too. However, you need a cloud provider in order to provision a LoadBalancer, and I am not sure where you ran the test; I did mine on a cloud service provider.
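For reference, a minimal sketch of that workaround (the fullname below is hypothetical; it depends on the release name, e.g. a release called "pre-prod-mongodb" would generate "pre-prod-mongodb-mongodb-sharded"):

## Workaround sketch: make service.name match the fullname the chart generates,
## so the host hardcoded in MONGODB_MONGOS_HOST resolves to the mongos Service.
service:
  name: pre-prod-mongodb-mongodb-sharded
  type: LoadBalancer
  port: 27017

Running `kubectl get svc` is also a quick way to check the LoadBalancer: on clusters without a cloud load-balancer integration the EXTERNAL-IP stays at `<pending>`.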

carrodher commented 3 years ago

Yep, you're totally right. I was testing it on a shared cluster and I thought it was configured in a different way.

Everything is up and running with the above change:

$ kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
mongo3-mongodb-sharded-configsvr-0     1/1     Running   0          22m
mongo3-mongodb-sharded-mongos-0        1/1     Running   0          22m
mongo3-mongodb-sharded-shard0-data-0   1/1     Running   0          22m
mongo3-mongodb-sharded-shard1-data-0   1/1     Running   0          22m

I just created a PR (see below) addressing this issue.