
[bitnami/mongodb-sharded] Application unable to connect to mongos that talks to external config server and shard data, observed pid is not found #7988

Closed: sudheersagi closed this issue 2 years ago

sudheersagi commented 2 years ago

Which chart: mongodb-sharded (chart version 3.9.14, appVersion 4.4.10)

Describe the bug I am running the chart on AWS EKS. Our architecture runs only mongos as a service inside our application cluster, connecting to an external config server. The config server and shard data nodes are centralised in another cluster on the same network, shared by multiple services.

I deployed the mongodb-sharded chart and provided the necessary details in values.yaml as described in the README section https://github.com/bitnami/charts/tree/master/bitnami/mongodb-sharded#using-an-external-config-server. The values.yaml and logs are attached below for reference.

Mongos Logs

mongodb 07:31:22.14 
mongodb 07:31:22.14 Welcome to the Bitnami mongodb-sharded container
mongodb 07:31:22.14 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb-sharded
mongodb 07:31:22.14 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb-sharded/issues
mongodb 07:31:22.14 
mongodb 07:31:22.14 INFO  ==> ** Starting MongoDB Sharded setup **
mongodb 07:31:22.17 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 07:31:22.18 INFO  ==> Initializing Mongos...
mongodb 07:31:22.19 WARN  ==> Mounted mongos configuration file as mongodb.conf. Copying it to mongos.conf
mongodb 07:31:22.19 INFO  ==> Writing keyfile for replica set authentication...
mongodb 07:31:22.21 DEBUG ==> Waiting for primary node...
mongodb 07:31:22.21 DEBUG ==> Waiting for primary node...
mongodb 07:31:22.21 INFO  ==> Trying to connect to MongoDB server configsvr0.example.com...
mongodb 07:31:22.23 INFO  ==> Found MongoDB server listening at configsvr0.example.com:27019 !

Values.yaml

## @section Global parameters
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass

## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.storageClass Global storage class for dynamic provisioning
##
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []
  storageClass: ""

## @section Common parameters

## @param nameOverride String to partially override mongodb.fullname template (will maintain the release name)
##
nameOverride: ""
## @param fullnameOverride String to fully override mongodb.fullname template
##
fullnameOverride: ""
## @param clusterDomain Kubernetes Cluster Domain
## ref: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction
##
clusterDomain: cluster.local

## Enable diagnostic mode in the deployment
##
diagnosticMode:
  ## @param diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden)
  ##
  enabled: false
  ## @param diagnosticMode.command Command to override all containers in the deployment
  ##
  command:
    - sleep
  ## @param diagnosticMode.args Args to override all containers in the deployment
  ##
  args:
    - infinity

# Deployment environment
env: load

## @section MongoDB® Sharded parameters

## Bitnami MongoDB® Sharded image version
## ref: https://hub.docker.com/r/bitnami/mongodb-sharded/tags/
## @param image.registry MongoDB® Sharded image registry
## @param image.repository MongoDB® Sharded Image name
## @param image.tag MongoDB® Sharded image tag (immutable tags are recommended)
## @param image.pullPolicy MongoDB® Sharded image pull policy
## @param image.pullSecrets Specify docker-registry secret names as an array
## @param image.debug Specify if debug logs should be enabled
##
image:
  registry: docker.io
  repository: bitnami/mongodb-sharded
  tag: 4.4.10-debian-10-r15
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ## e.g:
  ## pullSecrets:
  ##   - myRegistryKeySecretName
  ##
  pullSecrets: []
  ## Set to true if you would like to see extra information on logs
  ##
  debug: true
## MongoDB® credentials
## @param mongodbRootPassword MongoDB® root password
## If set to null it will be randomly generated
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run
## e.g:
## mongodbRootPassword: password
##
mongodbRootPassword: ""
## @param replicaSetKey Replica Set key (shared for shards and config servers)
## e.g:
## replicaSetKey: testkey123
##
replicaSetKey: ""
## @param existingSecret Existing secret with MongoDB® credentials
## e.g:
## existingSecret: name-of-existing-secret
##
existingSecret: ""
## @param usePasswordFile Mount credentials as files instead of using environment variables
##
usePasswordFile: false
## @param shards Number of shards to be created
## ref: https://docs.mongodb.com/manual/core/sharded-cluster-shards/
##
#shards: 2
## Properties for all of the pods in the cluster (shards, config servers and mongos)
##
common:
  ## @param common.mongodbEnableNumactl Enable launch MongoDB instance prefixed with "numactl --interleave=all"
  ## ref: https://docs.mongodb.com/manual/administration/production-notes/#mongodb-and-numa-hardware
  ##
  mongodbEnableNumactl: false
  ## @param common.useHostnames Enable DNS hostnames in the replica set config
  ##
  useHostnames: true
  ## @param common.mongodbEnableIPv6 Switch to enable/disable IPv6 on MongoDB®
  ## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-ipv6
  ##
  mongodbEnableIPv6: false
  ## @param common.mongodbDirectoryPerDB Switch to enable/disable DirectoryPerDB on MongoDB®
  ## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-directoryperdb
  ##
  mongodbDirectoryPerDB: false
  ## @param common.mongodbSystemLogVerbosity MongoDB® system log verbosity level
  ## ref: https://docs.mongodb.com/manual/reference/program/mongo/#cmdoption-mongo-ipv6
  ##
  mongodbSystemLogVerbosity: 0
  ## @param common.mongodbDisableSystemLog Whether to disable MongoDB® system log or not
  ## ref: https://github.com/bitnami/bitnami-docker-mongodb#configuring-system-log-verbosity-level
  ##
  mongodbDisableSystemLog: false
  ## @param common.mongodbMaxWaitTimeout Maximum time (in seconds) for MongoDB® nodes to wait for another MongoDB® node to be ready
  ##
  mongodbMaxWaitTimeout: 120
  ## @param common.initScriptsCM Configmap with init scripts to execute
  ##
  initScriptsCM: ""
  ## @param common.initScriptsSecret Secret with init scripts to execute (for sensitive data)
  ##
  initScriptsSecret: ""
  ## @param common.extraEnvVars An array to add extra env vars
  ## For example:
  ## extraEnvVars:
  ##  - name: KIBANA_ELASTICSEARCH_URL
  ##    value: test
  ##
  extraEnvVars: 
    - name: MONGODB_CFG_PRIMARY_PORT_NUMBER
      value: "27019"
    - name: MONGODB_MOUNTED_CONF_DIR
      value: /bitnami/mongodb/conf
  ## @param common.extraEnvVarsCM Name of a ConfigMap containing extra env vars
  ##
  extraEnvVarsCM: ""
  ## @param common.extraEnvVarsSecret Name of a Secret containing extra env vars
  ##
  extraEnvVarsSecret: ""
  ## @param common.sidecars Add sidecars to the pod
  ## For example:
  ## sidecars:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
  ##         containerPort: 1234
  ##
  sidecars: []
  ## @param common.initContainers Add init containers to the pod
  ## For example:
  ## initcontainers:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##
  initContainers: []
  ## @param common.podAnnotations Additional pod annotations
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations: {}
  ## @param common.podLabels Additional pod labels
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  ##
  podLabels: {}
  ## @param common.extraVolumes Array to add extra volumes
  ##
  extraVolumes: []
  ## @param common.extraVolumeMounts Array to add extra mounts (normally used with extraVolumes)
  ##
  extraVolumeMounts: []
  ## K8s Service Account.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    ## @param common.serviceAccount.create Whether to create a Service Account for all pods automatically
    ##
    create: false
    ## @param common.serviceAccount.name Name of a Service Account to be used by all Pods
    ## If not set and create is true, a name is generated using the XXX.fullname template
    ##
    name: ""
## Init containers parameters:
## volumePermissions: Change the owner and group of the persistent volume mountpoint to runAsUser:fsGroup values from the securityContext section.
##
volumePermissions:
  ## @param volumePermissions.enabled Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work)
  ##
  enabled: true
  ## @param volumePermissions.image.registry Init container volume-permissions image registry
  ## @param volumePermissions.image.repository Init container volume-permissions image name
  ## @param volumePermissions.image.tag Init container volume-permissions image tag
  ## @param volumePermissions.image.pullPolicy Init container volume-permissions image pull policy
  ## @param volumePermissions.image.pullSecrets Init container volume-permissions image pull secrets
  ##
  image:
    registry: docker.io
    repository: bitnami/bitnami-shell
    tag: 10-debian-10-r234
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## @param volumePermissions.resources Init container resource requests/limit
  ##
  resources: {}
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
## @param securityContext.enabled Enable security context
## @param securityContext.fsGroup Group ID for the container
## @param securityContext.runAsUser User ID for the container
## @param securityContext.runAsNonRoot Run containers as non-root users
##
securityContext:
  enabled: true
  fsGroup: 0
  runAsUser: 0
  runAsNonRoot: false
## Kubernetes service type
## ref: https://kubernetes.io/docs/concepts/services-networking/service/
##
service:
  ## @param service.name Specify an explicit service name
  ##
  name: ""
  ## @param service.annotations Additional service annotations (evaluate as a template)
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  annotations: {}
  ## @param service.type Service type
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
  ##
  type: ClusterIP
  ## @param service.externalTrafficPolicy External traffic policy
  ## Enable client source IP preservation
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
  ##
  externalTrafficPolicy: Cluster
  ## @param service.port MongoDB® service port
  ##
  port: 27017
  ## @param service.clusterIP Static clusterIP or None for headless services
  ## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#servicespec-v1-core
  ##
  clusterIP: ""
  ## @param service.nodePort Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  nodePort: ""
  ## @param service.externalIPs External IP list to use with ClusterIP service type
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
  ##
  externalIPs: []
  ## @param service.loadBalancerIP Static IP Address to use for LoadBalancer service type
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
  ##
  loadBalancerIP: ""
  ## @param service.loadBalancerSourceRanges List of IP ranges allowed access to load balancer (if supported)
  ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  loadBalancerSourceRanges: []
  ## @param service.extraPorts Extra ports to expose (normally used with the `sidecar` value)
  ##
  extraPorts: []
  ## @param service.sessionAffinity Session Affinity for Kubernetes service, can be "None" or "ClientIP"
  ## If "ClientIP", consecutive client requests will be directed to the same mongos Pod
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
  ##
  sessionAffinity: None
## Configure extra options for liveness probes
## This applies to all the MongoDB® in the sharded cluster
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
## @param livenessProbe.enabled Enable livenessProbe
## @param livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
## @param livenessProbe.periodSeconds Period seconds for livenessProbe
## @param livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
## @param livenessProbe.failureThreshold Failure threshold for livenessProbe
## @param livenessProbe.successThreshold Success threshold for livenessProbe
##
livenessProbe:
  enabled: false
  initialDelaySeconds: 120
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
## Configure extra options for readiness probe
## This applies to all the MongoDB® in the sharded cluster
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
## @param readinessProbe.enabled Enable readinessProbe
## @param readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
## @param readinessProbe.periodSeconds Period seconds for readinessProbe
## @param readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
## @param readinessProbe.failureThreshold Failure threshold for readinessProbe
## @param readinessProbe.successThreshold Success threshold for readinessProbe
##
readinessProbe:
  enabled: false
  initialDelaySeconds: 120
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1

## @section Config Server parameters

## Config Server replica set properties
## ref: https://docs.mongodb.com/manual/core/sharded-cluster-config-servers/
##
configsvr:
  ## @param configsvr.replicas Number of nodes in the replica set (the first node will be primary)
  ##
  replicas: 1
  ## @param configsvr.resources Configure pod resources
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources: {}
  ## @param configsvr.hostAliases Deployment pod host aliases
  ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
  ##
  hostAliases: []
  ## @param configsvr.mongodbExtraFlags MongoDB® additional command line flags
  ## Can be used to specify command line flags, for example:
  ## mongodbExtraFlags:
  ##  - "--wiredTigerCacheSizeGB=2"
  ##
  mongodbExtraFlags: []
  ## @param configsvr.priorityClassName Pod priority class name
  ## https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  ##
  priorityClassName: ""
  ## @param configsvr.podAffinityPreset Config Server Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
  ##
  podAffinityPreset: ""
  ## @param configsvr.podAntiAffinityPreset Config Server Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
  ##
  podAntiAffinityPreset: soft
  ## Node affinity preset
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  ##
  nodeAffinityPreset:
    ## @param configsvr.nodeAffinityPreset.type Config Server Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
    ##
    type: ""
    ## @param configsvr.nodeAffinityPreset.key Config Server Node label key to match Ignored if `affinity` is set.
    ## E.g.
    ## key: "kubernetes.io/e2e-az-name"
    ##
    key: ""
    ## @param configsvr.nodeAffinityPreset.values Config Server Node label values to match. Ignored if `affinity` is set.
    ## E.g.
    ## values:
    ##   - e2e-az1
    ##   - e2e-az2
    ##
    values: []
  ## @param configsvr.affinity Config Server Affinity for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ## Note: configsvr.podAffinityPreset, configsvr.podAntiAffinityPreset, and configsvr.nodeAffinityPreset will be ignored when it's set
  ##
  affinity: {}
  ## @param configsvr.nodeSelector Config Server Node labels for pod assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}
  ## @param configsvr.tolerations Config Server Tolerations for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []
  ## @param configsvr.podManagementPolicy Statefulset's pod management policy, allows parallel startup of pods
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
  ##
  podManagementPolicy: OrderedReady
  ## @param configsvr.updateStrategy.type updateStrategy for MongoDB® Primary, Secondary and Arbiter statefulsets
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
  ##
  updateStrategy:
    type: RollingUpdate
  ## @param configsvr.config MongoDB® configuration file
  ## ref: http://docs.mongodb.org/manual/reference/configuration-options/
  ##
  config: ""
  ## @param configsvr.configCM ConfigMap name with Config Server configuration file (cannot be used with configsvr.config)
  ## ref: http://docs.mongodb.org/manual/reference/configuration-options/
  ##
  configCM: ""
  ## @param configsvr.extraEnvVars An array to add extra env vars
  ## For example:
  ## extraEnvVars:
  ##  - name: KIBANA_ELASTICSEARCH_URL
  ##    value: test
  ##
  extraEnvVars: []
  ## @param configsvr.extraEnvVarsCM Name of a ConfigMap containing extra env vars
  ##
  extraEnvVarsCM: ""
  ## @param configsvr.extraEnvVarsSecret Name of a Secret containing extra env vars
  ##
  extraEnvVarsSecret: ""
  ## @param configsvr.sidecars Add sidecars to the pod
  ## For example:
  ## sidecars:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
  ##         containerPort: 1234
  ##
  sidecars: []
  ## @param configsvr.initContainers Add init containers to the pod
  ## For example:
  ## initcontainers:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##
  initContainers: []
  ## @param configsvr.podAnnotations Additional pod annotations
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations: {}
  ## @param configsvr.podLabels Additional pod labels
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  ##
  podLabels: {}
  ## @param configsvr.extraVolumes Array to add extra volumes. Requires setting `extraVolumeMounts`
  ##
  extraVolumes: []
  ## @param configsvr.extraVolumeMounts Array to add extra mounts (normally used with extraVolumes). Normally used with `extraVolumes`
  ##
  extraVolumeMounts: []
  ## @param configsvr.schedulerName Use an alternate scheduler, e.g. "stork".
  ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
  ##
  schedulerName: ""
  ## Pod disruption budget
  ##
  pdb:
    ## @param configsvr.pdb.enabled Enable pod disruption budget
    ##
    enabled: false
    ## @param configsvr.pdb.minAvailable Minimum number of available config pods allowed (`0` to disable)
    ##
    minAvailable: 0
    ## @param configsvr.pdb.maxUnavailable Maximum number of unavailable config pods allowed (`0` to disable)
    ##
    maxUnavailable: 1
  ## Enable persistence using Persistent Volume Claims
  ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    ## @param configsvr.persistence.enabled Use a PVC to persist data
    ##
    enabled: true
    ## @param configsvr.persistence.mountPath Path to mount the volume at
    ## MongoDB® images.
    ##
    mountPath: /bitnami/mongodb
    ## @param configsvr.persistence.subPath Subdirectory of the volume to mount at
    ## Useful in dev environments and one PV for multiple services.
    ##
    subPath: ""
    ## @param configsvr.persistence.storageClass Storage class of backing PVC
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    storageClass: ""
    ## @param configsvr.persistence.accessModes Use volume as ReadOnly or ReadWrite
    ##
    accessModes:
      - ReadWriteOnce
    ## @param configsvr.persistence.size PersistentVolumeClaim size
    ##
    size: 8Gi
    ## @param configsvr.persistence.annotations Persistent Volume annotations
    ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
    ##
    annotations: {}
  ## K8s Service Account.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    ## @param configsvr.serviceAccount.create Specifies whether a ServiceAccount should be created for Config Server
    ##
    create: false
    ## @param configsvr.serviceAccount.name Name of a Service Account to be used by Config Server
    ## If not set and create is true, a name is generated using the XXX.fullname template
    ##
    name: ""
  ## Use a external config server instead of deploying one
  ##
  external:
    ## @param configsvr.external.host Primary node of an external Config Server replicaset
    ##
    host: "configsvr1.example.com"
    ## @param configsvr.external.rootPassword Root password of the external Config Server replicaset
    ##
    rootPassword: rootpassword
    ## @param configsvr.external.replicasetName Replicaset name of an external Config Server
    ##
    replicasetName: "cfgrs0"
    ## @param configsvr.external.replicasetKey Replicaset key of an external Config Server
    ##
    replicasetKey: |-
      YT9GpcsjddsjhdshhsjdahjssdflqzSyKK/lJs1Ly9jxFAjhdjjKjsjfhdkJFJBDCBXBCSjsh
      hQrnebqxrFSmHb+v2iKPs7D7mVDIrcPkyJYGxeDN8rEcxTmI7KOytmz3B6sASM7R
      VBH5K9SxA+fyZlGeK+WqlwpJppdJQ88jEo0QvSmOCHofgKK2Q4UeJmEfSXzaDmV8
      PJsU6TYSKUdc9a02nW47ki254HepYnMRfx9NQ3P/SPdI4IitdY7HQHkLC5WDR5+T
      Xa1fT/64s3FgI7Lub8mH6V1Oz16GXBLR0lyUGQJZbdO6fRpfm96MPhmTb9BDnRHW
      zFSIl/yuMwl8W+UBxX+EBWW8sajj8jhjhJJHJKUY89983QuWmGzsmD2wDcIEq92N
      PsdghjhsghsagfhUIYSAjhjHiuJBhjh7867687TEBeCkg2NYYQz2lwqGfzmiAX

## @section Mongos parameters

## Mongos properties
## ref: https://docs.mongodb.com/manual/reference/program/mongos/#bin.mongos
##
mongos:
  ## @param mongos.replicas Number of replicas
  ##
  replicas: 1
  ## @param mongos.resources Configure pod resources
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources: {}
  ## @param mongos.hostAliases Deployment pod host aliases
  ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
  ##
  hostAliases: []
  ## @param mongos.mongodbExtraFlags MongoDB&reg; additional command line flags
  ## Can be used to specify command line flags, for example:
  ## mongodbExtraFlags:
  ##  - "--wiredTigerCacheSizeGB=2"
  ##
  mongodbExtraFlags: []
  ## @param mongos.priorityClassName Pod priority class name
  ## https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  ##
  priorityClassName: ""
  ## @param mongos.podAffinityPreset Mongos Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
  ##
  podAffinityPreset: ""
  ## @param mongos.podAntiAffinityPreset Mongos Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
  ##
  podAntiAffinityPreset: soft
  ## Node affinity preset
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  ##
  nodeAffinityPreset:
    ## @param mongos.nodeAffinityPreset.type Mongos Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
    ##
    type: ""
    ## @param mongos.nodeAffinityPreset.key Mongos Node label key to match Ignored if `affinity` is set.
    ## E.g.
    ## key: "kubernetes.io/e2e-az-name"
    ##
    key: ""
    ## @param mongos.nodeAffinityPreset.values Mongos Node label values to match. Ignored if `affinity` is set.
    ## E.g.
    ## values:
    ##   - e2e-az1
    ##   - e2e-az2
    ##
    values: []
  ## @param mongos.affinity Mongos Affinity for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ## Note: mongos.podAffinityPreset, mongos.podAntiAffinityPreset, and mongos.nodeAffinityPreset will be ignored when it's set
  ##
  affinity: {}
  ## @param mongos.nodeSelector Mongos Node labels for pod assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}
  ## @param mongos.tolerations Mongos Tolerations for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []
  ## @param mongos.podManagementPolicy Statefulsets pod management policy, allows parallel startup of pods
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
  ##
  podManagementPolicy: OrderedReady
  ## @param mongos.updateStrategy.type updateStrategy for MongoDB&reg; Primary, Secondary and Arbiter statefulsets
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
  ##
  updateStrategy:
    type: RollingUpdate
  ## @param mongos.config MongoDB&reg; configuration file
  ## ref: http://docs.mongodb.org/manual/reference/configuration-options/
  ##
  config: |-
    net:
      #bindIp: 0.0.0.0,127.0.0.1,app1.host.com,mongos.namespace1.svc.cluster.local
      bindIpAll: true
      port: 27017
      unixDomainSocket:
        enabled: true
        pathPrefix: /opt/bitnami/mongodb/tmp
    processManagement:
      fork: true
      pidFilePath: /opt/bitnami/mongodb/tmp/mongodb.pid
    security:
      keyFile: /opt/bitnami/mongodb/conf/keyfile
      clusterAuthMode: keyFile
    sharding:
      configDB: cfgrs0/configsvr0.example.com:27019,configsvr1.example.com:27019,configsvr2.example.com:27019
    systemLog:
      destination: file 
      path: /opt/bitnami/mongodb/logs/mongodb.log
      logAppend: true
  ## @param mongos.configCM ConfigMap name with MongoDB&reg; configuration file (cannot be used with mongos.config)
  ## ref: http://docs.mongodb.org/manual/reference/configuration-options/
  ##
  configCM: ""
  ## @param mongos.extraEnvVars An array to add extra env vars
  ## For example:
  ## extraEnvVars:
  ##  - name: KIBANA_ELASTICSEARCH_URL
  ##    value: test
  ##
  extraEnvVars: 
    - name: MONGODB_CFG_PRIMARY_PORT_NUMBER
      value: "27019"
    - name: MONGODB_MOUNTED_CONF_DIR
      value: /bitnami/mongodb/conf
  ## @param mongos.extraEnvVarsCM Name of a ConfigMap containing extra env vars
  ##
  extraEnvVarsCM: ""
  ## @param mongos.extraEnvVarsSecret Name of a Secret containing extra env vars
  ##
  extraEnvVarsSecret: ""
  ## @param mongos.sidecars Add sidecars to the pod
  ## For example:
  ## sidecars:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
  ##         containerPort: 1234
  ##
  sidecars: []
  ## @param mongos.initContainers Add init containers to the pod
  ## For example:
  ## initcontainers:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##
  initContainers: []
  ## @param mongos.podAnnotations Additional pod annotations
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations: {}
  ## @param mongos.podLabels Additional pod labels
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  ##
  podLabels: {}
  ## @param mongos.extraVolumes Array to add extra volumes. Requires setting `extraVolumeMounts`
  ##
  extraVolumes: []
  ## @param mongos.extraVolumeMounts Array to add extra volume mounts. Normally used with `extraVolumes`.
  ##
  extraVolumeMounts: []
  ## @param mongos.schedulerName Use an alternate scheduler, e.g. "stork".
  ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
  ##
  schedulerName: ""
  ## @param mongos.useStatefulSet Use StatefulSet instead of Deployment
  ##
  useStatefulSet: false
  ## When using a statefulset, you can enable one service per replica
  ## This is useful when exposing the mongos through load balancers to make sure clients
  ## connect to the same mongos and therefore can follow their cursors
  ##
  servicePerReplica:
    ## @param mongos.servicePerReplica.enabled Create one service per mongos replica (must be used with statefulset)
    ##
    enabled: false
    ## @param mongos.servicePerReplica.annotations Additional service annotations (evaluate as a template)
    ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
    ##
    annotations: {}
    ## @param mongos.servicePerReplica.type Service type
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
    ##
    type: ClusterIP
    ## @param mongos.servicePerReplica.externalTrafficPolicy External traffic policy
    ## Enable client source IP preservation
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
    ##
    externalTrafficPolicy: Cluster
    ## @param mongos.servicePerReplica.port MongoDB&reg; service port
    ##
    port: 27017
    ## @param mongos.servicePerReplica.clusterIP Static clusterIP or None for headless services
    ## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#servicespec-v1-core
    ##
    clusterIP: ""
    ## @param mongos.servicePerReplica.nodePort Specify the nodePort value for the LoadBalancer and NodePort service types
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
    ##
    nodePort: ""
    ## @param mongos.servicePerReplica.externalIPs External IP list to use with ClusterIP service type
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
    ##
    externalIPs: []
    ## @param mongos.servicePerReplica.loadBalancerIP Static IP Address to use for LoadBalancer service type
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
    ##
    loadBalancerIP: ""
    ## @param mongos.servicePerReplica.loadBalancerSourceRanges List of IP ranges allowed access to load balancer (if supported)
    ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ##
    loadBalancerSourceRanges: []
    ## @param mongos.servicePerReplica.extraPorts Extra ports to expose (normally used with the `sidecar` value)
    ##
    extraPorts: []
    ## @param mongos.servicePerReplica.sessionAffinity Session Affinity for Kubernetes service, can be "None" or "ClientIP"
    ## If "ClientIP", consecutive client requests will be directed to the same mongos Pod
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
    ##
    sessionAffinity: None
  ## Pod disruption budget
  ##
  pdb:
    ## @param mongos.pdb.enabled Enable pod disruption budget
    ##
    enabled: false
    ## @param mongos.pdb.minAvailable Minimum number of available mongo pods allowed (`0` to disable)
    ##
    minAvailable: 0
    ## @param mongos.pdb.maxUnavailable Maximum number of unavailable mongo pods allowed (`0` to disable)
    ##
    maxUnavailable: 1
  ## K8s Service Account.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    ## @param mongos.serviceAccount.create Whether to create a Service Account for mongos automatically
    ##
    create: false
    ## @param mongos.serviceAccount.name Name of a Service Account to be used by mongos
    ## If not set and create is true, a name is generated using the XXX.fullname template
    ##
    name: ""

## @section Shard configuration: Data node parameters

## Shard replica set properties
## ref: https://docs.mongodb.com/manual/replication/index.html
##
shardsvr:
  ## Properties for data nodes (primary and secondary)
  ##
  dataNode:
    ## @param shardsvr.dataNode.replicas Number of nodes in each shard replica set (the first node will be primary)
    ##
    #replicas: 1
    ## @param shardsvr.dataNode.resources Configure pod resources
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources: {}
    ## @param shardsvr.dataNode.mongodbExtraFlags MongoDB&reg; additional command line flags
    ## Can be used to specify command line flags, for example:
    ## mongodbExtraFlags:
    ##  - "--wiredTigerCacheSizeGB=2"
    ##
    mongodbExtraFlags: []
    ## @param shardsvr.dataNode.priorityClassName Pod priority class name
    ## https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
    ## @param shardsvr.dataNode.podAffinityPreset Data nodes Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
    ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
    ##
    podAffinityPreset: ""
    ## @param shardsvr.dataNode.podAntiAffinityPreset Data nodes Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
    ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
    ##
    podAntiAffinityPreset: soft
    ## Node affinity preset
    ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
    ##
    nodeAffinityPreset:
      ## @param shardsvr.dataNode.nodeAffinityPreset.type Data nodes Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
      ##
      type: ""
      ## @param shardsvr.dataNode.nodeAffinityPreset.key Data nodes Node label key to match Ignored if `affinity` is set.
      ## E.g.
      ## key: "kubernetes.io/e2e-az-name"
      ##
      key: ""
      ## @param shardsvr.dataNode.nodeAffinityPreset.values Data nodes Node label values to match. Ignored if `affinity` is set.
      ## E.g.
      ## values:
      ##   - e2e-az1
      ##   - e2e-az2
      ##
      values: []
    ## @param shardsvr.dataNode.affinity Data nodes Affinity for pod assignment
    ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ## You can set dataNodeLoopId (or any other parameter) by setting the below code block under this 'affinity' section:
    ## affinity:
    ##   matchLabels:
    ##     shard: "{{ .dataNodeLoopId }}"
    ##
    ## Note: shardsvr.dataNode.podAffinityPreset, shardsvr.dataNode.podAntiAffinityPreset, and shardsvr.dataNode.nodeAffinityPreset will be ignored when it's set
    ##
    affinity: {}
    ## @param shardsvr.dataNode.nodeSelector Data nodes Node labels for pod assignment
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ## You can set dataNodeLoopId (or any other parameter) by setting the below code block under this 'nodeSelector' section:
    ## nodeSelector: { shardId: "{{ .dataNodeLoopId }}" }
    ##
    nodeSelector: {}
    ## @param shardsvr.dataNode.tolerations Data nodes Tolerations for pod assignment
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ## You can set dataNodeLoopId (or any other parameter) by setting the below code block under this 'nodeSelector' section:
    ## tolerations:
    ## - key: "shardId"
    ##   operator: "Equal"
    ##   value: "{{ .dataNodeLoopId }}"
    ##   effect: "NoSchedule"
    ##
    tolerations: []
    ## @param shardsvr.dataNode.podManagementPolicy podManagementPolicy for the statefulset, allows parallel startup of pods
    ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
    ##
    podManagementPolicy: OrderedReady
    ## @param shardsvr.dataNode.updateStrategy.type updateStrategy for MongoDB&reg; Primary, Secondary and Arbiter statefulsets
    ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
    ##
    updateStrategy:
      type: RollingUpdate
    ## @param shardsvr.dataNode.hostAliases Deployment pod host aliases
    ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
    ##
    hostAliases: []
    ## @param shardsvr.dataNode.config Entries for the MongoDB&reg; config file
    ## ref: http://docs.mongodb.org/manual/reference/configuration-options/
    ##
    config: ""
    ## @param shardsvr.dataNode.configCM ConfigMap name with MongoDB&reg; configuration (cannot be used with shardsvr.dataNode.config)
    ## ref: http://docs.mongodb.org/manual/reference/configuration-options/
    ##
    configCM: ""
    ## @param shardsvr.dataNode.extraEnvVars An array to add extra env vars
    ## For example:
    ## extraEnvVars:
    ##  - name: KIBANA_ELASTICSEARCH_URL
    ##    value: test
    ##
    extraEnvVars: []
    ## @param shardsvr.dataNode.extraEnvVarsCM Name of a ConfigMap containing extra env vars
    ##
    extraEnvVarsCM: ""
    ## @param shardsvr.dataNode.extraEnvVarsSecret Name of a Secret containing extra env vars
    ##
    extraEnvVarsSecret: ""
    ## @param shardsvr.dataNode.sidecars Attach additional containers (evaluated as a template)
    ## For example:
    ## sidecars:
    ##   - name: your-image-name
    ##     image: your-image
    ##     imagePullPolicy: Always
    ##     ports:
    ##       - name: portname
    ##         containerPort: 1234
    ##
    sidecars: []
    ## @param shardsvr.dataNode.initContainers Add init containers to the pod
    ## For example:
    ## initcontainers:
    ##   - name: your-image-name
    ##     image: your-image
    ##     imagePullPolicy: Always
    ##
    initContainers: []
    ## @param shardsvr.dataNode.podAnnotations Additional pod annotations
    ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
    ##
    podAnnotations: {}
    ## @param shardsvr.dataNode.podLabels Additional pod labels
    ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
    ##
    podLabels: {}
    ## @param shardsvr.dataNode.extraVolumes Array to add extra volumes. Requires setting `extraVolumeMounts`
    ##
    extraVolumes: []
    ## @param shardsvr.dataNode.extraVolumeMounts Array to add extra mounts. Normally used with `extraVolumes`
    ##
    extraVolumeMounts: []
    ## @param shardsvr.dataNode.schedulerName Use an alternate scheduler, e.g. "stork".
    ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
    ##
    schedulerName: ""
    ## Pod disruption budget
    ##
    pdb:
      ## @param shardsvr.dataNode.pdb.enabled Enable pod disruption budget
      ##
      enabled: false
      ## @param shardsvr.dataNode.pdb.minAvailable Minimum number of available data pods allowed (`0` to disable)
      ##
      minAvailable: 0
      ## @param shardsvr.dataNode.pdb.maxUnavailable Maximum number of unavailable data pods allowed (`0` to disable)
      ##
      maxUnavailable: 1
    ## K8s Service Account.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
    ##
    serviceAccount:
      ## @param shardsvr.dataNode.serviceAccount.create Specifies whether a ServiceAccount should be created for shardsvr
      ##
      create: false
      ## @param shardsvr.dataNode.serviceAccount.name Name of a Service Account to be used by shardsvr data pods
      ## If not set and create is true, a name is generated using the XXX.fullname template
      ##
      name: ""

  ## @section Shard configuration: Persistence parameters

  ## Enable persistence using Persistent Volume Claims
  ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    ## @param shardsvr.persistence.enabled Use a PVC to persist data
    ##
    enabled: true
    ## @param shardsvr.persistence.mountPath The path the volume will be mounted at, useful when using different MongoDB&reg; images.
    ##
    mountPath: /bitnami/mongodb
    ## @param shardsvr.persistence.subPath Subdirectory of the volume to mount at
    ## Useful in development environments and one PV for multiple services.
    ##
    subPath: ""
    ## @param shardsvr.persistence.storageClass Storage class of backing PVC
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    storageClass: ""
    ## @param shardsvr.persistence.accessModes Use volume as ReadOnly or ReadWrite
    ##
    accessModes:
      - ReadWriteOnce
    ## @param shardsvr.persistence.size PersistentVolumeClaim size
    ##
    size: 8Gi
    ## @param shardsvr.persistence.annotations Additional volume annotations
    ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
    ##
    annotations: {}

  ## @section Shard configuration: Arbiter parameters

  ## Properties for arbiter nodes
  ## ref: https://docs.mongodb.com/manual/tutorial/add-replica-set-arbiter/
  ##
  arbiter:
    ## @param shardsvr.arbiter.replicas Number of arbiters in each shard replica set (the first node will be primary)
    ##
    #replicas: 0
    ## @param shardsvr.arbiter.hostAliases Deployment pod host aliases
    ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
    ##
    hostAliases: []
    ## @param shardsvr.arbiter.resources Configure pod resources
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources: {}
    ## @param shardsvr.arbiter.mongodbExtraFlags MongoDB&reg; additional command line flags
    ## Can be used to specify command line flags, for example:
    ## mongodbExtraFlags:
    ##  - "--wiredTigerCacheSizeGB=2"
    ##
    mongodbExtraFlags: []
    ## @param shardsvr.arbiter.priorityClassName Pod priority class name
    ## https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
    ## @param shardsvr.arbiter.podAffinityPreset Arbiter's Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
    ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
    ##
    podAffinityPreset: ""
    ## @param shardsvr.arbiter.podAntiAffinityPreset Arbiter's Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
    ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
    ##
    podAntiAffinityPreset: soft
    ## Node affinity preset
    ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
    ##
    nodeAffinityPreset:
      ## @param shardsvr.arbiter.nodeAffinityPreset.type Arbiter's Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
      ##
      type: ""
      ## @param shardsvr.arbiter.nodeAffinityPreset.key Arbiter's Node label key to match Ignored if `affinity` is set.
      ## E.g.
      ## key: "kubernetes.io/e2e-az-name"
      ##
      key: ""
      ## @param shardsvr.arbiter.nodeAffinityPreset.values Arbiter's Node label values to match. Ignored if `affinity` is set.
      ## E.g.
      ## values:
      ##   - e2e-az1
      ##   - e2e-az2
      ##
      values: []
    ## @param shardsvr.arbiter.affinity Arbiter's Affinity for pod assignment
    ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ## You can set arbiterLoopId (or any other parameter) by setting the below code block under this 'affinity' section:
    ## affinity:
    ##   matchLabels:
    ##     shard: "{{ .arbiterLoopId }}"
    ##
    ## Note: shardsvr.arbiter.podAffinityPreset, shardsvr.arbiter.podAntiAffinityPreset, and shardsvr.arbiter.nodeAffinityPreset will be ignored when it's set
    ##
    affinity: {}
    ## @param shardsvr.arbiter.nodeSelector Arbiter's Node labels for pod assignment
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
    ## @param shardsvr.arbiter.tolerations Arbiter's Tolerations for pod assignment
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
    ## @param shardsvr.arbiter.podManagementPolicy Statefulset's pod management policy, allows parallel startup of pods
    ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
    ##
    podManagementPolicy: OrderedReady
    ## @param shardsvr.arbiter.updateStrategy.type updateStrategy for MongoDB&reg; Primary, Secondary and Arbiter statefulsets
    ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
    ##
    updateStrategy:
      type: RollingUpdate
    ## @param shardsvr.arbiter.config MongoDB&reg; configuration file
    ## ref: http://docs.mongodb.org/manual/reference/configuration-options/
    ##
    config: ""
    ## @param shardsvr.arbiter.configCM ConfigMap name with MongoDB&reg; configuration file (cannot be used with shardsvr.arbiter.config)
    ## ref: http://docs.mongodb.org/manual/reference/configuration-options/
    ##
    configCM: ""
    ## @param shardsvr.arbiter.extraEnvVars An array to add extra env vars
    ## For example:
    ## extraEnvVars:
    ##  - name: KIBANA_ELASTICSEARCH_URL
    ##    value: test
    ##
    extraEnvVars: []
    ## @param shardsvr.arbiter.extraEnvVarsCM Name of a ConfigMap containing extra env vars
    ##
    extraEnvVarsCM: ""
    ## @param shardsvr.arbiter.extraEnvVarsSecret Name of a Secret containing extra env vars
    ##
    extraEnvVarsSecret: ""
    ## @param shardsvr.arbiter.sidecars Add sidecars to the pod
    ## For example:
    ## sidecars:
    ##   - name: your-image-name
    ##     image: your-image
    ##     imagePullPolicy: Always
    ##     ports:
    ##       - name: portname
    ##         containerPort: 1234
    ##
    sidecars: []
    ## @param shardsvr.arbiter.initContainers Add init containers to the pod
    ## For example:
    ## initcontainers:
    ##   - name: your-image-name
    ##     image: your-image
    ##     imagePullPolicy: Always
    ##
    initContainers: []
    ## @param shardsvr.arbiter.podAnnotations Additional pod annotations
    ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
    ##
    podAnnotations: {}
    ## @param shardsvr.arbiter.podLabels Additional pod labels
    ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
    ##
    podLabels: {}
    ## @param shardsvr.arbiter.extraVolumes Array to add extra volumes
    ##
    extraVolumes: []
    ## @param shardsvr.arbiter.extraVolumeMounts Array to add extra mounts (normally used with extraVolumes)
    ##
    extraVolumeMounts: []
    ## @param shardsvr.arbiter.schedulerName Use an alternate scheduler, e.g. "stork".
    ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
    ##
    schedulerName: ""
    ## K8s Service Account.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
    ##
    serviceAccount:
      ## @param shardsvr.arbiter.serviceAccount.create Specifies whether a ServiceAccount should be created for shardsvr arbiter nodes
      ##
      create: false
      ## @param shardsvr.arbiter.serviceAccount.name Name of a Service Account to be used by shardsvr arbiter pods
      ## If not set and create is true, a name is generated using the XXX.fullname template
      ##
      name: ""

## @section Metrics parameters

metrics:
  ## @param metrics.enabled Start a side-car prometheus exporter
  ##
  enabled: false
  ## @param metrics.image.registry MongoDB&reg; exporter image registry
  ## @param metrics.image.repository MongoDB&reg; exporter image name
  ## @param metrics.image.tag MongoDB&reg; exporter image tag
  ## @param metrics.image.pullPolicy MongoDB&reg; exporter image pull policy
  ## @param metrics.image.pullSecrets MongoDB&reg; exporter image pull secrets
  ##
  image:
    registry: docker.io
    repository: bitnami/mongodb-exporter
    tag: 0.11.2-debian-10-r322
    pullPolicy: Always
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## @param metrics.extraArgs String with extra arguments to the metrics exporter
  ## ref: https://github.com/dcu/mongodb_exporter/blob/master/mongodb_exporter.go
  ##
  extraArgs: ""
  ## @param metrics.resources Metrics exporter resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources: {}
  ## Metrics exporter liveness probe
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
  ## @param metrics.livenessProbe.enabled Enable livenessProbe
  ## @param metrics.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
  ## @param metrics.livenessProbe.periodSeconds Period seconds for livenessProbe
  ## @param metrics.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
  ## @param metrics.livenessProbe.failureThreshold Failure threshold for livenessProbe
  ## @param metrics.livenessProbe.successThreshold Success threshold for livenessProbe
  ##
  livenessProbe:
    enabled: false
    initialDelaySeconds: 15
    periodSeconds: 5
    timeoutSeconds: 5
    failureThreshold: 3
    successThreshold: 1
  ## Metrics exporter liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
  ## @param metrics.readinessProbe.enabled Enable readinessProbe
  ## @param metrics.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
  ## @param metrics.readinessProbe.periodSeconds Period seconds for readinessProbe
  ## @param metrics.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
  ## @param metrics.readinessProbe.failureThreshold Failure threshold for readinessProbe
  ## @param metrics.readinessProbe.successThreshold Success threshold for readinessProbe
  ##
  readinessProbe:
    enabled: false
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1
  ## @param metrics.containerPort Port of the Prometheus metrics container
  ##
  containerPort: 9216
  ## @param metrics.podAnnotations [object] Metrics exporter pod Annotation
  ##
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "{{ .Values.metrics.containerPort }}"
  ## Prometheus Service Monitor
  ## ref: https://github.com/coreos/prometheus-operator
  ##      https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
  ##
  podMonitor:
    ## @param metrics.podMonitor.enabled Create PodMonitor Resource for scraping metrics using PrometheusOperator
    ##
    enabled: false
    ## @param metrics.podMonitor.namespace Namespace where podmonitor resource should be created
    ##
    namespace: monitoring
    ## @param metrics.podMonitor.interval Specify the interval at which metrics should be scraped
    ##
    interval: 30s
    ## @param metrics.podMonitor.scrapeTimeout Specify the timeout after which the scrape is ended
    ## e.g:
    ## scrapeTimeout: 30s
    ##
    scrapeTimeout: ""
    ## @param metrics.podMonitor.additionalLabels Additional labels that can be used so PodMonitors will be discovered by Prometheus
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    ##
    additionalLabels: {}

The reason for adding the variables below is to copy the provided config to /opt/bitnami/mongodb/conf/mongos.conf and mongodb.conf; otherwise the container generates a conf file with default values and assumes the default config server port 27017, whereas in our case the config servers run on port 27019.

extraEnvVars: 
    - name: MONGODB_CFG_PRIMARY_PORT_NUMBER
      value: "27019"
    - name: MONGODB_MOUNTED_CONF_DIR
      value: /bitnami/mongodb/conf
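
To confirm these variables are actually set in the running container, something like the following should work (the pod name and label selector are illustrative, not taken from the chart output):

kubectl get pods -n namespace1 -l app.kubernetes.io/component=mongos
kubectl exec -n namespace1 <mongos-pod> -- env | grep '^MONGODB_'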

When I looked inside the container, the provided 'config' contents had been copied to mongodb.conf and mongos.conf from the mount path /bitnami/mongodb/conf. From the pod logs I believe it establishes a connection to the config server, but when my app connects to the mongos service I get a 'connection refused' error on the application side.

ps -ef | grep mongo

root      1055    14  0 09:35 pts/0    00:00:00 /bin/bash /opt/bitnami/scripts/mongodb-sharded/entrypoint.sh /opt/bitnami/scripts/mongodb-sharded/run.sh
root      1062  1055  0 09:35 pts/0    00:00:00 /bin/bash /opt/bitnami/scripts/mongodb-sharded/setup.sh
root      1118  1052  0 09:35 pts/1    00:00:00 grep mongo
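
One thing worth checking: the supplied mongos config sets processManagement.fork: true and sends systemLog to a file, so a mongos startup failure would land in that log file rather than the container stdout. A hedged way to inspect it and to look for a listener on 27017 (pod name illustrative; ss/netstat may not be present in the image):

kubectl exec -n namespace1 <mongos-pod> -- cat /opt/bitnami/mongodb/logs/mongodb.log
kubectl exec -n namespace1 <mongos-pod> -- bash -c 'ss -ltn 2>/dev/null || netstat -ltn'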

I also tried enabling diagnosticMode and checked the logs, still with no luck. It is not clear what I am missing; please help me fix this.

To Reproduce: helm upgrade --install mongos mongodb-sharded --namespace namespace1 (using the values.yaml above)

Expected behavior The mongos PID file should be created at /opt/bitnami/mongodb/tmp/mongodb.pid and the application should be able to talk to mongos.

Version of Helm and Kubernetes:

version.BuildInfo{Version:"v3.5.3", GitCommit:"041ce5a2c17a58be0fcd5f5e16fb3e7e95fea622", GitTreeState:"dirty", GoVersion:"go1.16"}
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.13-eks-8df270", GitCommit:"8df2700a72a2598fa3a67c05126fa158fd839620", GitTreeState:"clean", BuildDate:"2021-07-31T01:36:57Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}

Additional context: I also tried running the container as the root user to check the listening ports and observed that port 27017 is not open for access from the application pod.
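For reference, a minimal sketch of that port check from inside the container (ss and netstat may not ship in the image, so /proc/net/tcp is the fallback; port 27017 shows up as hex 6989 in its local_address column):

ss -tlnp 2>/dev/null || netstat -tlnp 2>/dev/null || cat /proc/net/tcp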

yilmi commented 2 years ago

Hi @sudheersagi,

We do manage the configuration through environment variables as described in the mongodb-sharded container repository.

When you override the configuration file mount point, you break some of our logic, so you should make sure you understand how that impacts the container.

You can find more information in that repository.

If you use only our values, you should be able to get the same result without overriding the configuration mount point.

Could you confirm you are able to connect to configsvr1.example.com? How can the pod resolve this host name?
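For example, you could verify name resolution from inside the pod with something like the following (the pod name is a placeholder; getent should be available in the Debian-based image):

kubectl exec -it <mongos-pod> --namespace <your-namespace> -- getent hosts configsvr0.example.com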

I'm happy to help further, but I think there are a couple of things that should be reviewed to make sure the problem is indeed coming from the chart.

sudheersagi commented 2 years ago

Thank you so much @yilmi for helping me here.

Could you confirm you are able to connect to configsvr1.example.com? How can the pod resolve this host name?

I tried running the command below from the pod where mongos is being set up:

mongos --configdb cfgrs0/configsvr0.example.com:27019 --port 27017 --clusterAuthMode keyFile --keyFile /opt/bitnami/mongodb/conf/keyfile

Here are the logs

{"t":{"$date":"2021-11-01T11:34:34.507Z"},"s":"W",  "c":"SHARDING", "id":24132,   "ctx":"main","msg":"Running a sharded cluster with fewer than 3 config servers should only be done for testing purposes and is not recommended for production."}
{"t":{"$date":"2021-11-01T11:34:34.508+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2021-11-01T11:34:34.509+00:00"},"s":"W",  "c":"CONTROL",  "id":22138,   "ctx":"main","msg":"You are running this process as the root user, which is not recommended","tags":["startupWarnings"]}
{"t":{"$date":"2021-11-01T11:34:34.509+00:00"},"s":"W",  "c":"CONTROL",  "id":22140,   "ctx":"main","msg":"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning","tags":["startupWarnings"]}
{"t":{"$date":"2021-11-01T11:34:34.574+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"mongosMain","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.10","gitVersion":"58971da1ef93435a9f62bf4708a81713def6e88c","openSSLVersion":"OpenSSL 1.1.1d  10 Sep 2019","modules":[],"allocator":"tcmalloc","environment":{"distmod":"debian10","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2021-11-01T11:34:34.574+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"mongosMain","msg":"Operating System","attr":{"os":{"name":"PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"","version":"Kernel 5.4.149-73.259.amzn2.x86_64"}}}
{"t":{"$date":"2021-11-01T11:34:34.574+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"mongosMain","msg":"Options set by command line","attr":{"options":{"net":{"port":27017},"security":{"clusterAuthMode":"keyFile","keyFile":"/opt/bitnami/mongodb/conf/keyfile"},"sharding":{"configDB":"cfgrs0/configsvr0.example.com:27019"}}}}
{"t":{"$date":"2021-11-01T11:34:34.574+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"mongosMain","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2021-11-01T11:34:34.575+00:00"},"s":"I",  "c":"NETWORK",  "id":4603701, "ctx":"mongosMain","msg":"Starting Replica Set Monitor","attr":{"protocol":"streamable","uri":"cfgrs0/configsvr0.example.com:27019"}}
{"t":{"$date":"2021-11-01T11:34:34.575+00:00"},"s":"I",  "c":"-",        "id":4333223, "ctx":"mongosMain","msg":"RSM now monitoring replica set","attr":{"replicaSet":"cfgrs0","nReplicaSetMembers":1}}
{"t":{"$date":"2021-11-01T11:34:34.575+00:00"},"s":"I",  "c":"-",        "id":4333226, "ctx":"mongosMain","msg":"RSM host was added to the topology","attr":{"replicaSet":"cfgrs0","host":"configsvr0.example.com:27019"}}
{"t":{"$date":"2021-11-01T11:34:34.575+00:00"},"s":"I",  "c":"CONNPOOL", "id":22576,   "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Connecting","attr":{"hostAndPort":"configsvr0.example.com:27019"}}
{"t":{"$date":"2021-11-01T11:34:34.575+00:00"},"s":"I",  "c":"SHARDING", "id":22649,   "ctx":"thread1","msg":"Creating distributed lock ping thread","attr":{"processId":"mongos-mongos-848f6d7cd7-ftc7f:27017:1635766474:6736574883495560854","pingIntervalMillis":30000}}
{"t":{"$date":"2021-11-01T11:34:34.581+00:00"},"s":"I",  "c":"NETWORK",  "id":23729,   "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"ServerPingMonitor is now monitoring host","attr":{"host":"configsvr0.example.com:27019","replicaSet":"cfgrs0"}}
{"t":{"$date":"2021-11-01T11:34:34.581+00:00"},"s":"I",  "c":"NETWORK",  "id":4333213, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM Topology Change","attr":{"replicaSet":"cfgrs0","newTopologyDescription":"{ id: \"569a2ee3-f369-4c97-8df9-fefa1e26a98e\", topologyType: \"ReplicaSetNoPrimary\", servers: { configsvr0.example.com:27019: { address: \"configsvr0.example.com:27019\", topologyVersion: { processId: ObjectId('6166610c90e7cec7b72108a5'), counter: 4 }, roundTripTime: 633, lastWriteDate: new Date(1635766473000), opTime: { ts: Timestamp(1635766473, 3), t: 1 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 9, me: \"configsvr0.example.com:27019\", setName: \"cfgrs0\", setVersion: 4, primary: \"configsvr1.example.com:27019\", lastUpdateTime: new Date(1635766474581), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"configsvr1.example.com:27019\", 1: \"configsvr2.example.com:27019\", 2: \"configsvr0.example.com:27019\" }, arbiters: {}, passives: {} }, configsvr1.example.com:27019: { address: \"configsvr1.example.com:27019\", type: \"Unknown\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, configsvr2.example.com:27019: { address: \"configsvr2.example.com:27019\", type: \"Unknown\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} } }, logicalSessionTimeoutMinutes: 30, setName: \"cfgrs0\", compatible: true }","previousTopologyDescription":"{ id: \"b5e13d07-fc75-4dbc-90e3-2816e624de3f\", topologyType: \"ReplicaSetNoPrimary\", servers: { configsvr0.example.com:27019: { address: \"configsvr0.example.com:27019\", type: \"Unknown\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} } }, setName: \"cfgrs0\", compatible: true }"}}
{"t":{"$date":"2021-11-01T11:34:34.581+00:00"},"s":"I",  "c":"-",        "id":4333226, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM host was added to the topology","attr":{"replicaSet":"cfgrs0","host":"configsvr1.example.com:27019"}}
{"t":{"$date":"2021-11-01T11:34:34.581+00:00"},"s":"I",  "c":"-",        "id":4333226, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM host was added to the topology","attr":{"replicaSet":"cfgrs0","host":"configsvr2.example.com:27019"}}
{"t":{"$date":"2021-11-01T11:34:34.581+00:00"},"s":"I",  "c":"-",        "id":4333218, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Rescheduling the next replica set monitoring request","attr":{"replicaSet":"cfgrs0","host":"configsvr2.example.com:27019","delayMillis":0}}
{"t":{"$date":"2021-11-01T11:34:34.581+00:00"},"s":"I",  "c":"-",        "id":4333218, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Rescheduling the next replica set monitoring request","attr":{"replicaSet":"cfgrs0","host":"configsvr1.example.com:27019","delayMillis":0}}
{"t":{"$date":"2021-11-01T11:34:34.581+00:00"},"s":"I",  "c":"CONNPOOL", "id":22576,   "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Connecting","attr":{"hostAndPort":"configsvr2.example.com:27019"}}
{"t":{"$date":"2021-11-01T11:34:34.581+00:00"},"s":"I",  "c":"CONNPOOL", "id":22576,   "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Connecting","attr":{"hostAndPort":"configsvr1.example.com:27019"}}
{"t":{"$date":"2021-11-01T11:34:34.649+00:00"},"s":"I",  "c":"SHARDING", "id":22792,   "ctx":"ShardRegistry","msg":"Term advanced for config server","attr":{"opTime":{"ts":{"$timestamp":{"t":1635766473,"i":3}},"t":1},"prevOpTime":{"ts":{"$timestamp":{"t":0,"i":0}},"t":-1},"reason":"reply from config server node","clientAddress":"(unknown)"}}
{"t":{"$date":"2021-11-01T11:34:34.649+00:00"},"s":"I",  "c":"NETWORK",  "id":4603701, "ctx":"shard-registry-reload","msg":"Starting Replica Set Monitor","attr":{"protocol":"streamable","uri":"shardrs1/shardsvr1-1.example.com:27018,shardsvr1-2.example.com:27018,shardsvr1-3.example.com:27018"}}

Regarding overrides: initially I tried using your values without any overrides, but the config was not copied and I saw errors like 'configsvr host (127.0.0.1) unknown'. After overriding MONGODB_MOUNTED_CONF_DIR, I see the custom config get copied to /opt/bitnami/mongodb/conf from the mounted path /bitnami/mongodb/conf.

Sorry if my observation is wrong. Also, I am running the same deployment without any changes to the volume mounts.

I think MONGODB_MOUNTED_CONF_DIR defaults to /bitnami/conf, whereas in the deployment YAML the config is mounted at /bitnami/mongodb/conf:

- name: config
  mountPath: /bitnami/mongodb/conf/
yilmi commented 2 years ago

Thanks for the additional information here. Ok, regarding this issue you mentioned below:

Regarding overrides: initially I tried using your values without any overrides, but the config was not copied and I saw errors like 'configsvr host (127.0.0.1) unknown'. After overriding MONGODB_MOUNTED_CONF_DIR, I see the custom config get copied to /opt/bitnami/mongodb/conf from the mounted path /bitnami/mongodb/conf.

Please check if there are any PVCs left after uninstalling your chart, as Helm won't delete them for you. This would be an issue: when the container starts, it checks whether certain files already exist and, if so, skips some configuration steps. That is probably what happened (it happens quite often in our issues).
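Something along these lines should surface and clean up any leftovers (namespace and PVC names below are placeholders):

kubectl get pvc --namespace <your-namespace>
kubectl delete pvc <leftover-pvc-name> --namespace <your-namespace>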

Regarding the connectivity issue you report, have you noticed the following log lines?

{"t":{"$date":"2021-11-01T11:34:34.581+00:00"},"s":"I", "c":"CONNPOOL", "id":22576, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Connecting","attr":{"hostAndPort":"configsvr1.example.com:27019"}}
{"t":{"$date":"2021-11-01T11:34:34.649+00:00"},"s":"I", "c":"SHARDING", "id":22792, "ctx":"ShardRegistry","msg":"Term advanced for config server","attr":{"opTime":{"ts":{"$timestamp":{"t":1635766473,"i":3}},"t":1},"prevOpTime":{"ts":{"$timestamp":{"t":0,"i":0}},"t":-1},"reason":"reply from config server node","clientAddress":"(unknown)"}}

The second line reports "clientAddress" as "(unknown)", which could be the cause of your issues here. Could you try performing a curl -v https://configsvr0.example.com:27019 or similar (perhaps with http)?

This should at least give you some ideas about whether or not the TCP handshake is going through, and if you're using TLS, if the Authentication and Key Exchange parts are also successful.
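If TLS noise makes the curl output hard to read, a bare TCP probe with bash's /dev/tcp redirection (assuming the image's bash was built with network redirection support, as Debian's is) would confirm the handshake on its own:

timeout 5 bash -c 'cat < /dev/null > /dev/tcp/configsvr0.example.com/27019' && echo "TCP connect OK"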

sudheersagi commented 2 years ago

Thanks for pointing this out, @yilmi.

Here is the curl output

curl -v https://configsvr0.example.com:27019

* Expire in 0 ms for 6 (transfer 0x55cf28e45fb0)
* Expire in 1 ms for 1 (transfer 0x55cf28e45fb0)
* Expire in 0 ms for 1 (transfer 0x55cf28e45fb0)
* Expire in 1 ms for 1 (transfer 0x55cf28e45fb0)
* Expire in 0 ms for 1 (transfer 0x55cf28e45fb0)
* Expire in 0 ms for 1 (transfer 0x55cf28e45fb0)
* Expire in 1 ms for 1 (transfer 0x55cf28e45fb0)
* Expire in 0 ms for 1 (transfer 0x55cf28e45fb0)
* Expire in 0 ms for 1 (transfer 0x55cf28e45fb0)
* Expire in 1 ms for 1 (transfer 0x55cf28e45fb0)
...
...
...
* Expire in 9 ms for 1 (transfer 0x55cf28e45fb0)
* Expire in 9 ms for 1 (transfer 0x55cf28e45fb0)
* Expire in 12 ms for 1 (transfer 0x55cf28e45fb0)
*   Trying 10.xxx.8x.2x...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55cf28e45fb0)
* Connected to configsvr0.example.com (10.xxx.8x.2x) port 27019 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to configsvr0.example.com:27019 
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to configsvr0.example.com:27019

Regarding overrides, I read about this PVC issue in one of the issue comments and made sure not to have any PVCs left in the namespace; I will cross-verify it once again. We do have PVCs in a different Kubernetes namespace (a test environment) within the same cluster, and I hope that does not cause any issue.

sudheersagi commented 2 years ago

Hi @yilmi, we are not using SSL on the config server; both sides run within the same network and just connect using the keyFile /opt/bitnami/mongodb/conf/keyfile. Also, is there anything in values.yaml to control the non-secure connection from mongos to the configsvr?

"clientAddress" as "unkown"

It would be great if you could add more details around it.

yilmi commented 2 years ago

Hi @sudheersagi, I'm not trying to give you precise troubleshooting steps, I'm just trying to give you some pointers.

We are not MongoDB developers, and right now your problem seems more related to how MongoDB works than our chart itself. It seems that the Config Server also needs connectivity with the mongos - https://dba.stackexchange.com/a/201698
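If you want to rule that out, a rough check from one of the config server hosts could look like this (the mongos address is a placeholder, and it assumes the mongo shell is installed there; ping is allowed before authentication):

mongo --host <mongos-host-or-ip> --port 27017 --eval 'db.adminCommand({ ping: 1 })'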

On our end, the values provide support for defining the config server. Please reopen this issue if you run into a problem with the chart itself.

sudheersagi commented 2 years ago

Hi @yilmi

I believe this is for sure not a problem on the MongoDB side but in the Bitnami chart configuration itself, because when we overwrote /opt/bitnami/mongodb/conf/mongos.conf with the correct port configuration and fed it to the MongoDB startup scripts, the MongoDB connection succeeded.
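For completeness, the workaround looked roughly like this, run inside the mongos pod (the option values are reconstructed from the mongos command shown earlier in this thread; our actual 'config' value may differ slightly):

cat > /opt/bitnami/mongodb/conf/mongos.conf <<'EOF'
net:
  port: 27017
sharding:
  configDB: cfgrs0/configsvr0.example.com:27019
security:
  clusterAuthMode: keyFile
  keyFile: /opt/bitnami/mongodb/conf/keyfile
EOF
mongos -f /opt/bitnami/mongodb/conf/mongos.conf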

We would appreciate a couple of minutes of your time (maybe a quick Zoom call) to talk about our suspicion that the port variable override is not happening.

(https://github.com/bitnami/charts/tree/master/bitnami/mongodb-sharded#using-an-external-config-server)

#######################################

More details are described below:

I am able to connect to the Mongo configsvr from the pod without any issue when running mongos manually. What I was trying to tell you is that after installing the chart, as soon as the container is up and running it should start the mongos process and listen on the port (here 27017) for incoming requests. That is not happening, because I don't see any PID file inside /opt/bitnami/mongodb/tmp/. Correct me if this is not the expected behaviour of the chart.

Just to make sure, there are no existing PVCs:

kubectl get pvc --namespace namespace1
No resources found in namespace1 namespace.

I followed the steps below to make sure there is no connectivity issue with the MongoDB config server.

Step 1: Installed the chart with the above values.yaml file.

Step 2: Exec'd into the pod and verified that the conf files contain the custom changes:

  - /bitnami/mongodb/conf/mongodb.conf  -- contains the custom config as given in the 'config' variable from the YAML
  - /opt/bitnami/mongodb/conf/keyfile   -- copied from the 'replicasetKey' variable
  - /opt/bitnami/mongodb/conf/mongodb.conf -- same as the 'config' variable
  - /opt/bitnami/mongodb/conf/mongos.conf -- same as the 'config' variable
  - /opt/bitnami/mongodb/tmp/   -- the PID file is missing, so mongos has presumably not started

Step 3: Ran mongos -f manually. Terminal output:

mongos -f /opt/bitnami/mongodb/conf/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 747
child process started successfully, parent exiting

Step 4: Verified the process ID; now I see the PID file, which was supposed to be created during container startup itself:

/opt/bitnami/mongodb/tmp# ls
mongodb-27017.sock  mongodb.pid

Step 5: Connected to the (external) configsvr/shardsvr from the pod running mongos, since in Step 4 we saw the mongos process had started:

$ mongo --host 127.0.0.1 --port 27017 --authenticationDatabase "admin" -u "user" -p "passwd"
MongoDB shell version v4.4.10
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("066431fe-028a-453e-b689-f703a5fc781c") }
MongoDB server version: 4.4.10
mongos>

FYI, Steps 2 to 5 were performed within the mongos pod.

Step 6: Connected from the application and verified the data from MongoDB:

logger:org.mongodb.driver.connection message:Opened connection [connectionId{localValue:13, serverValue:85}] to mongodb.namespace1.svc.cluster.local:27017

From the above steps, I believe there is no connectivity issue with the externally running Mongo config server, and I think the mongos process should be started when the container is up and running (if that is the expected behaviour of this chart).

I also checked the processes running while the pod was up:

# ps -ef

root         1     0  0 17:57 ?        00:00:00 /bin/bash /opt/bitnami/scripts/mongodb-sharded/entrypoint.sh /opt/bitnami/scripts/mongodb-sharded/run.sh
root        13     1  0 17:57 ?        00:00:00 /bin/bash /opt/bitnami/scripts/mongodb-sharded/setup.sh

I also tried launching a separate instance (within the network, outside the cluster) and manually installing the MongoDB binary with the same mongos conf file, and it was able to connect to our centralised Mongo config server without any issues. We still see

"clientAddress":"(unknown)"

in the log, but we are able to connect to Mongo, so I guess this is not an actual error and can be ignored.