bitnami / charts

Bitnami Helm Charts
https://bitnami.com

Allowing option for SSL/TLS as a standard option in the bitnami/mongodb chart #3365

Closed. kwill4026 closed this issue 3 years ago.

kwill4026 commented 4 years ago

Which chart: bitnami/mongodb

Is your feature request related to a problem? Please describe. In the day and age we are in, TLS support should not be an afterthought and instead the default. I would like if TLS support could be an option in bitnami/mongodb chart using self signed certs at the minimum.

Describe the solution you'd like The bitnami/mongodb chart uses bitnami/bitnami-docker-mongodb [https://github.com/bitnami/bitnami-docker-mongodb] as the base MongoDB image for running on Kubernetes with the Helm package manager, and that image already supports TLS. When using bitnami/bitnami-docker-mongodb with docker or docker-compose, enabling TLS works fine with no problem. The issue comes when trying to do the same with Kubernetes and the Helm chart. The way the bitnami/mongodb chart works, when you set the replica count to a certain number, say 3, the chart spins up 3 pods (1 primary and 2 secondaries). What I would like to see is each pod containing its own mongodb.pem file signed by the same CA. I've tried to get the chart to do all of this automatically when the user performs a helm install, so that the CA cert and server certs get created automatically, but it seems I will have to manually create the CA cert and its key, put them in a volume mount backed by a secret, and then use them to generate the server cert, again with its own volume and secret. Since I will be adding extra parameters to create the server cert, I generate a CSR and sign it with the ca.crt and ca.key. In the end, the mongo cert and mongo key I generated get concatenated into the PEM file (mongodb.pem) used to access the cluster.

Describe alternatives you've considered What I have done so far is generate my own TLS Certificate Authority. In the values-production.yaml file I have included a section to enable TLS.

tls:
  enabled: true 
  mode: requireTLS
  cacert: <paste the CA cert here>
  cakey: <paste the CA key here>

I generated the CA key and public CA cert manually and placed the output in the section above:

#generating the CA
openssl genrsa -out ca.key 2048 
openssl req -x509 -new -nodes -key ca.key -days 3650 -out ca.crt -subj "/CN=Certificate Authority"

I then created the CSRs for the nodes. I created a volume and volume mount called "certificates". From the values-production.yaml file, my manually created root CA cert and key are created as a secret that gets mounted on /certificates. A CSR is generated for each node in the replica set, a cert is created from that CSR, and it is signed with the ca.crt and ca.key. The private key and the public certificate are then concatenated to produce the mongodb.pem file. If I request 3 replicas when installing the chart, I would expect each replica to contain a /certificates directory with a mongodb.pem file. To get the mongodb.pem generated in each replica node, I created an init script: upon a helm install of the chart, the init script does its job and generates the final product in the /certificates directory.

initdbScripts:
  my_init_script.sh: |
    #!/bin/bash
    my_hostname=$(hostname -f)
    ca_crt=/certificates/ca.crt
    if [ -f "$ca_crt" ]; then
        log "Located CA cert file, will generate certificate"
        ca_key=/certificates/ca.key
        pem=/certificates/mongodb.pem
        pushd /certificates
        cat >openssl.cnf <<EOL
    [req]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name
    [req_distinguished_name]
    [v3_req]
    basicConstraints = CA:FALSE
    keyUsage = nonRepudiation, digitalSignature, keyEncipherment
    subjectAltName = @alt_names
    [alt_names]
    DNS.1 = $HOSTNAME
    DNS.2 = $MONGODB_ADVERTISED_HOSTNAME
    DNS.3 = localhost
    DNS.4 = 127.0.0.1
    EOL

        export RANDFILE=.rnd && openssl genrsa -out mongo.key 2048
        openssl req -new -key mongo.key -out mongo.csr -subj "/C=US/O=U.S. Testing/OU=IT/CN=$HOSTNAME" -config openssl.cnf
        openssl x509 -req -in mongo.csr -CA "$ca_crt" -CAkey "$ca_key" -CAcreateserial -out mongo.crt -days 3650 -extensions v3_req -extfile openssl.cnf
        rm mongo.csr
        cat mongo.crt mongo.key > $pem
        chmod 755 $pem
        rm mongo.key mongo.crt
    fi

The ca.crt and ca.key files created earlier are referenced from the values-production.yaml file and added as a secret in secrets.yaml:

{{- if (include "mongodb.createSecret" .) }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "mongodb.fullname" . }}
  namespace: {{ template "mongodb.namespace" . }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    app.kubernetes.io/component: mongodb
type: Opaque
data:
  {{- if .Values.auth.rootPassword }}
  mongodb-root-password:  {{ .Values.auth.rootPassword | b64enc | quote }}
  {{- else }}
  mongodb-root-password: {{ randAlphaNum 10 | b64enc | quote }}
  {{- end }}
  {{- if and .Values.auth.username .Values.auth.database }}
  {{- if .Values.auth.password }}
  mongodb-password:  {{ .Values.auth.password | b64enc | quote }}
  {{- else }}
  mongodb-password: {{ randAlphaNum 10 | b64enc | quote }}
  {{- end }}
  {{- end }}
  {{- if eq .Values.architecture "replicaset" }}
  {{- if .Values.auth.replicaSetKey }}
  mongodb-replica-set-key:  {{ .Values.auth.replicaSetKey | b64enc | quote }}
  {{- else }}
  mongodb-replica-set-key: {{ randAlphaNum 10 | b64enc | quote }}
  {{- end }}
  {{- end }}
  ca.crt: {{ .Values.tls.cacert | b64enc | quote }}
  ca.key: {{ .Values.tls.cakey | b64enc | quote }}
{{- end }}

Upon a helm install, the ca.crt and ca.key get installed into the /certificates directory from the secret.

volumeMounts:
  - name: certificates
    mountPath: /certificates
volumes:
  - name: certificates
    secret:
      secretName: {{ include "mongodb.secretName" . }}
      items:
        - key: ca.crt
          path: ca.crt
          mode: 511
        - key: ca.key
          path: ca.key
          mode: 511

Now I want the init script containing the mongodb.pem generation to automatically read the /certificates directory, see the ca.crt/ca.key, and write the generated files into /certificates. However, when doing this the pod spits out an error:

mongodb 13:48:23.04 Located CA cert file, will generate certificate
/certificates /
/docker-entrypoint-initdb.d/..2020_08_07_13_48_01.629505613/my_init_script.sh: line 9: openssl.cnf: Read-only file system
mongodb 13:48:23.05 INFO  ==> Stopping MongoDB...

Remember, in my init script for the server cert creation I am moving into the /certificates directory to create the openssl.cnf file:

pushd /certificates
cat >openssl.cnf <<EOL
...
...
EOL

I am not sure how to get around this error, because if I choose any other directory I get /docker-entrypoint-initdb.d/..2020_08_07_13_54_33.116378537/my_init_script.sh: line 9: openssl.cnf: Permission denied

I decided to utilize an initContainer to get around this. Not sure if it's the correct way, but it worked. Basically, I changed the volume named certificates, which contains the ca.crt and ca.key secret, to a volume named certs-volume, and created another volume named certificates backed by an emptyDir instead. The plan here is to create an initContainer which simply copies the contents from the certs-volume to the certificates volume; within the initContainer there is a volumeMount for both certs-volume and certificates. It looks like this:

initContainers:
  - name: copy-config
    image: {{ include "mongodb.volumePermissions.image" . }}
    imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }}
    command:
      - /bin/bash
      - -ec
    args:
      - |
        cp "/certificates/CAs/ca.crt" "/certificates/CAs/ca.key" "/certificates"
    volumeMounts:
      - name: certificates
        mountPath: /certificates
      - name: certs-volume
        mountPath: /certificates/CAs
volumes:
  - name: certificates
    emptyDir: {}
  - name: certs-volume
    secret:
      secretName: {{ include "mongodb.secretName" . }}
      items:
        - key: ca.crt
          path: ca.crt
          mode: 511
        - key: ca.key
          path: ca.key
          mode: 511

Now when I do a helm install of the chart, I am able to see the mongodb.pem created in the /certificates directory of each replica set pod. This is good.

I have no name!@test-mongodb-0:/$ ls
bin  bitnami  boot  certificates  dev  docker-entrypoint-initdb.d  etc  home  lib  lib64  media  mnt  opt  pem-dir  proc  root  run  sbin  scripts  srv  sys  tmp  usr  var
I have no name!@test-mongodb-0:/$ cd certificates/
I have no name!@test-mongodb-0:/certificates$ ls -la
total 12
drwxrwsrwx 3 root 1001   64 Aug  7 17:03 .
drwxr-xr-x 1 root root  120 Aug  7 17:03 ..
drwxr-sr-x 2 root 1001    6 Aug  7 17:03 CAs
-rwxr-xr-x 1 root 1001 1086 Aug  7 17:03 ca.crt
-rwxr-xr-x 1 root 1001 1685 Aug  7 17:03 ca.key
-rwxr-xr-x 1 1001 1001 2737 Aug  7 17:03 mongodb.pem

Now using the extraFlags field in the values-production.yaml, I plan on enabling SSL/TLS by specifying the correct settings for the mongod to start up with TLS.

extraFlags:
  - "--tlsMode=preferTLS"
  - "--tlsCertificateKeyFile=/certificates/mongodb.pem"
  - "--tlsCAFile=/certificates/ca.crt"

I expect the mongod to now be configured to start up with TLS, recognizing the ca.crt file (which is the same in each replica set member) as well as the mongodb.pem (which is different in each pod but has the same name). However, upon installing the chart the pod crashes and displays the following error:

[xxx@ip-10-111-25-255 mongodb_cp]$ kubectl logs test-mongodb-0
Advertised Hostname: test-mongodb-0.test-mongodb-headless.default.svc.cluster.local
Pod name matches initial primary pod name, configuring node as a primary
mongodb 17:14:57.93 
mongodb 17:14:57.93 Welcome to the Bitnami mongodb container
mongodb 17:14:57.93 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb 17:14:57.93 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb 17:14:57.93 
mongodb 17:14:57.93 INFO  ==> ** Starting MongoDB setup **
mongodb 17:14:57.95 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 17:14:57.96 INFO  ==> Initializing MongoDB...
mongodb 17:14:57.97 INFO  ==> Deploying MongoDB from scratch...
mongodb 17:14:57.97 DEBUG ==> Starting MongoDB in background...
about to fork child process, waiting until server is ready for connections.
forked process: 40
ERROR: child process failed, exited with error number 1
To see additional information in this output, start without the "--fork" option.

So I did just that: rebuilt the bitnami/bitnami-docker-mongodb Docker image without --fork. Now the error was more specific, as I could see this:

Advertised Hostname: test-mongodb-0.test-mongodb-headless.default.svc.cluster.local
Pod name matches initial primary pod name, configuring node as a primary
mongodb 17:19:34.82 
mongodb 17:19:34.82 Welcome to the Bitnami mongodb container
mongodb 17:19:34.83 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb 17:19:34.83 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb 17:19:34.83 
mongodb 17:19:34.83 INFO  ==> ** Starting MongoDB setup **
mongodb 17:19:34.84 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 17:19:34.85 INFO  ==> Initializing MongoDB...
mongodb 17:19:34.86 INFO  ==> Deploying MongoDB from scratch...
mongodb 17:19:34.87 DEBUG ==> Starting MongoDB in background...
2020-08-07T17:19:34.890+0000 I  CONTROL  [main] ***** SERVER RESTARTED *****
2020-08-07T17:19:34.892+0000 E  NETWORK  [main] cannot read certificate file: /certificates/mongodb.pem error:02001002:system library:fopen:No such file or directory
2020-08-07T17:19:34.892+0000 F  CONTROL  [main] Failed global initialization: InvalidSSLConfiguration: Can not set up PEM key file.

It seems as if, when the helm chart gets installed, it's not recognizing the mongodb.pem file in /certificates. Even though I see it get created in /certificates, mongod does not recognize it. My belief is that it does not recognize the user that the mongodb.pem file gets created as. Can someone take a look at this and see what I am missing? If needed, I can send you the source files of my chart directly.
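One way to check that suspicion is to compare the ownership and permissions of the generated PEM with the user mongod actually runs as (a debugging sketch only, using the pod name from the logs above):

kubectl exec -it test-mongodb-0 -- bash -c 'id; ls -l /certificates'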

seguidor777 commented 4 years ago

Hi,

I would like to see this feature as well. I am currently handling the certificates within a secret; I also have a volume and a volume mount, and finally a ConfigMap for specifying the TLS configuration.

mongo.conf

net:
  tls:
    mode: "requireTLS"    
    CAFile: "/certificates/ca.pem"
    certificateKeyFile: "/certificates/mongodb-combined.pem"
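If that file is meant to be supplied through the chart itself rather than a hand-made ConfigMap, the same settings could presumably go under the chart's configuration value (a sketch only, reusing the paths above):

configuration: |
  net:
    tls:
      mode: "requireTLS"
      CAFile: "/certificates/ca.pem"
      certificateKeyFile: "/certificates/mongodb-combined.pem"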

When I try to install the chart, both pods (the statefulset pod and the arbiter) show the message Deploying MongoDB from scratch... and crash. Please let me know of any progress on this feature or any workaround that you know of.

dani8art commented 4 years ago

Hi all, thanks for opening this issue

Read-only file system

It happens because volumes mounted from Secrets or ConfigMaps are mounted as read-only file systems.

its not recognizing the mongodb.pem file in /certificates

Could you try keeping them separate, as described in the official mongod docs? I mean, not concatenating the key and the crt.

SEE ALSO

You can also configure mongod and mongos using command-line options instead of the configuration file:

For mongod, see --tlsMode, --tlsCertificateKeyFile, and --tlsCAFile.
kwill4026 commented 4 years ago

@dani8art it would be impossible to keep them separate, correct? From reading all the docs on signed certs and the mongo docs on --tlsCertificateKeyFile, you have to concatenate the cert and key. It's even in the description that it "specifies the .pem file that contains both the TLS certificate and key".


Setting: net.tls.certificateKeyFile / --tlsCertificateKeyFile
Notes: Set to the path of the file that contains the **TLS/SSL certificate and key**. The mongod/mongos instance presents this file to its clients to establish the instance's identity.

Every doc I see on these self-signed certs always concatenates the cert and key. If I did not, and just left mongo.crt and mongo.key, how could I even specify that in --tlsCertificateKeyFile when it expects a single file name?

dani8art commented 4 years ago

Hi @kwill4026, we can also try this https://github.com/bitnami/bitnami-docker-mongodb#enabling-ssltls.

using chart options like:

extraEnvVars:
  - name: MONGODB_EXTRA_FLAGS
    value: --sslMode=requireSSL --sslPEMKeyFile=/certificates/mongodb-primary.pem --sslClusterFile=/certificates/mongodb-primary.pem --sslCAFile=/certificates/mongoCA.crt
kwill4026 commented 4 years ago

@dani8art I'm still getting the same error even with this option. Honestly, I do not see how this would even work with the Helm deployment with regard to the PEM certs the way you have specified it, using "mongodb-primary.pem", "mongodb-secondary.pem", etc. With the docker-compose setup you have the ability to set up separate containers, separate volumes, and separate certs. That works fine.

services:
  mongo-1:
    image: 'bitnami/mongodb:4.2.0'
    container_name: mongo-1
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongo-1
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_ROOT_PASSWORD=root
      - MONGODB_REPLICA_SET_KEY=key12345
      - MONGODB_EXTRA_FLAGS=--tlsMode=requireTLS --tlsCertificateKeyFile=/certs/mongodb-primary.pem --tlsCAFile=/certs/ca.crt
      - MONGODB_CLIENT_EXTRA_FLAGS=--tls --tlsCertificateKeyFile=/certs/mongodb-primary.pem --tlsCAFile=/certs/ca.crt
    volumes:
      - './certs/mongo-1.pem:/certs/mongodb-primary.pem'
      - './certs/ca.crt:/certs/ca.crt'

  mongo-2:
    image: 'bitnami/mongodb:4.2.0'
    container_name: mongo-2
    depends_on:
      - mongo-1
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongo-2
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_PRIMARY_HOST=mongo-1
      - MONGODB_PRIMARY_ROOT_PASSWORD=root
      - MONGODB_REPLICA_SET_KEY=key12345
      - MONGODB_EXTRA_FLAGS=--tlsMode=requireTLS --tlsCertificateKeyFile=/certs/mongodb-secondary.pem --tlsCAFile=/certs/ca.crt
      - MONGODB_CLIENT_EXTRA_FLAGS=--tls --tlsCertificateKeyFile=/certs/mongodb-secondary.pem --tlsCAFile=/certs/ca.crt
    volumes:
      - './certs/mongo-2.pem:/certs/mongodb-secondary.pem'
      - './certs/ca.crt:/certs/ca.crt'

Whereas with the Helm deployment I cannot specify --tlsCertificateKeyFile=/certificates/mongodb-primary.pem if the deployment is going to spin up 3 replicas, for instance. The tlsCertificateKeyFile can only be specified via a ConfigMap, via extraFlags, or via extraEnvVars, in which case it is impossible to use more than one file name (mongodb-primary.pem, mongodb-secondary.pem); it can only take a single PEM file. With my initScript and initContainer setup I am able to get the cert creation script to create a mongodb.pem in the /certificates directory of each replica/secondary container. It produces the same file name, mongodb.pem, in each container, with each cert corresponding to that particular container's CN, in this case the hostname. However, when I specify the TLS settings in extraFlags, the pods crash. I'm not sure if it's a 4.2 tag issue or what. I tried to use an older tag (3.6, 4.0) but a file seems to be missing, which causes the pods to not run.

dani8art commented 4 years ago

Hi @kwill4026, sorry, maybe I didn't explain myself well. I'm not saying we should do exactly the same, but use the same approach; I mean, env vars instead of extraFlags, --sslPEMKeyFile instead of --tlsCertificateKeyFile, and so on.

seguidor777 commented 4 years ago

Hi @dani8art ,

I just tried with the options that you pointed out, including the options for the client. My env vars are as follows:

extraEnvVars:
  - name: MONGODB_EXTRA_FLAGS
    value: --sslMode=requireTLS --sslPEMKeyFile=/certificates/mongodb-combined.pem --sslClusterFile=/certificates/mongodb-combined.pem --sslCAFile=/certificates/ca.pem
  - name: MONGODB_CLIENT_EXTRA_FLAGS
    value: --tls --sslPEMKeyFile=/certificates/mongodb-combined.pem --sslCAFile=/certificates/ca.pem

But the servers are not running properly, because there is a connection error from the client:

2020-08-24T02:54:27.686+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:50414 #8 (4 connections now open)
2020-08-24T02:54:27.686+0000 I  NETWORK  [conn8] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 127.0.0.1:50414 (connection id: 8)
2020-08-24T02:54:27.686+0000 I  NETWORK  [conn8] end connection 127.0.0.1:50414 (3 connections now open)

First I had tried without the client extra flags, but I got an error; that's why I added those options later.

kwill4026 commented 4 years ago

@seguidor777 @dani8art It does not work for me either. Can we get someone to help work on the TLS part, or at least test it out on their end to see the issues we are seeing, so we can collectively figure this out?

franklin432 commented 4 years ago

@dani8art I also tried creating the CA cert file and then a server cert with a wildcard Common Name that matches the domain name of all the replica set members. I created both to be consumed as secrets and passed them to a volume mount named "certificates" that gets created by an initContainer upon a helm install. I first perform a helm install of a default bitnami/mongodb chart, which brings up the pods, then I add my initContainer, which creates the certificates volume mount and copies the cert secrets into it. When I specify

extraEnvVars:
  - name: MONGODB_EXTRA_FLAGS
    value: --tlsMode=preferTLS --tlsCertificateKeyFile=/certificates/mongo.pem --tlsCAFile=/certificates/ca.crt
  - name: MONGODB_CLIENT_EXTRA_FLAGS
    value: --tls --tlsCertificateKeyFile=/certificates/mongo.pem --tlsCAFile=/certificates/ca.crt

and perform a helm upgrade of the deployment, the pods are able to run, and in the logs I can see this: 2020-08-26T16:00:06.107+0000 I NETWORK [conn5] SSL mode is set to 'preferred' and connection 5 to 127.0.0.1:33598 is not using SSL. Despite that notice, when I bash into the container I am able to connect simply with a username/password combination as well as with username/password AND the TLS settings (pem and ca files). This makes sense since tlsMode is set to prefer. The problem occurs when I set tlsMode to requireTLS. Typically when you perform an upgrade of an existing deployment, the secondaries get upgraded first and the primary last. In this case, with tlsMode set to require, after I performed a helm upgrade it showed the secondary being upgraded but failed on the primary. The primary just shows the previous deployment configuration, meaning the upgrade affected only the secondary. The secondary shows as RUNNING, however its ready status shows 0/1 instead of 1/1 and the logs read:

2020-08-26T16:30:39.736+0000 I  NETWORK  [conn650] end connection 100.96.1.20:53630 (0 connections now open)
2020-08-26T16:30:39.738+0000 I  NETWORK  [listener] connection accepted from 100.96.1.20:53632 #651 (1 connection now open)
2020-08-26T16:30:39.738+0000 I  NETWORK  [conn651] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 100.96.1.20:53632 (connection id: 651)
marcosbc commented 4 years ago

Hi, could you clarify if the initialization succeeded with the SSL options? We'll try to reproduce this issue on our side and get back to you once we have more information.

franklin432 commented 4 years ago

@marcosbc It seems it succeeded only when I set tlsMode to "preferTLS" and use wildcard certs. However, once I set it to "requireTLS", I see the same error that @seguidor777 noted above and the pods are unable to run successfully. Clearly, when set to preferTLS it is able to install and run the deployment fine because it is not dependent on those TLS settings; it's just another option for the server to run with or without. However, when set to requireTLS, the server has to run with the provided TLS settings, the secondaries and primary have to start with the same TLS settings, and then the client has to connect with the TLS settings as well.

seguidor777 commented 4 years ago

Regarding the error we got, it seems that the client is not being configured with the TLS settings: the server is configured to only allow SSL connections and the client isn't using them, so it's making a plain request.

marcosbc commented 4 years ago

Hi, after checking this, I would say the issue is related to the readiness probes. Could you try to test this change locally to see if there is any other issue we haven't found on our side?

diff --git a/bitnami/mongodb/templates/replicaset/statefulset.yaml b/bitnami/mongodb/templates/replicaset/statefulset.yaml
index 430df1715..d370c5d8b 100644
--- a/bitnami/mongodb/templates/replicaset/statefulset.yaml
+++ b/bitnami/mongodb/templates/replicaset/statefulset.yaml
@@ -238,9 +238,9 @@ spec:
           readinessProbe:
             exec:
               command:
-                - mongo
-                - --eval
-                - "db.adminCommand('ping')"
+                - bash
+                - -ec
+                - mongo $MONGODB_CLIENT_EXTRA_FLAGS --eval "db.adminCommand('ping')"
             initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
             periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
             timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
diff --git a/bitnami/mongodb/templates/standalone/dep-sts.yaml b/bitnami/mongodb/templates/standalone/dep-sts.yaml
index 292b5a54b..9bcdbef23 100644
--- a/bitnami/mongodb/templates/standalone/dep-sts.yaml
+++ b/bitnami/mongodb/templates/standalone/dep-sts.yaml
@@ -184,9 +184,9 @@ spec:
           readinessProbe:
             exec:
               command:
-                - mongo
-                - --eval
-                - "db.adminCommand('ping')"
+                - bash
+                - -ec
+                - mongo $MONGODB_CLIENT_EXTRA_FLAGS --eval "db.adminCommand('ping')"
             initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
             periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
             timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}

It takes advantage of Bash expansion to add any CLI options specified in the environment variable (as long as it is not quoted). If the environment variable is not defined, it expands to an empty string anyway, so it should still work.
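For illustration, this is the word-splitting behavior the unquoted expansion relies on (a minimal shell sketch, not part of the chart):

export MONGODB_CLIENT_EXTRA_FLAGS="--tls --tlsCAFile=/certificates/ca.crt"
# Unquoted, the variable splits into separate CLI arguments:
mongo $MONGODB_CLIENT_EXTRA_FLAGS --eval "db.adminCommand('ping')"
# which is equivalent to:
mongo --tls --tlsCAFile=/certificates/ca.crt --eval "db.adminCommand('ping')"
# If the variable is unset, the expansion disappears and the plain command runs.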

If it works, feel free to send a PR. We'd be glad to review it!

seguidor777 commented 4 years ago

Hi @marcosbc,

How can I override the default readinessProbe command? I added this value but it didn't work:

customReadinessProbe:
  exec:
    command:
      - bash
      - -ec
      - mongo $MONGODB_CLIENT_EXTRA_FLAGS --eval "db.adminCommand('ping')"
dani8art commented 4 years ago

Hi @seguidor777 ,

Please try to apply this diff

diff --git a/bitnami/mongodb/values.yaml b/bitnami/mongodb/values.yaml
index f4663db4f..4a05b3aba 100644
--- a/bitnami/mongodb/values.yaml
+++ b/bitnami/mongodb/values.yaml
@@ -298,14 +298,14 @@ resources:
 ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
 ##
 livenessProbe:
-  enabled: true
+  enabled: false
   initialDelaySeconds: 30
   periodSeconds: 10
   timeoutSeconds: 5
   failureThreshold: 6
   successThreshold: 1
 readinessProbe:
-  enabled: true
+  enabled: false
   initialDelaySeconds: 5
   periodSeconds: 10
   timeoutSeconds: 5
@@ -314,11 +314,30 @@ readinessProbe:

 ## Custom Liveness probes for MongoDB pods
 ##
-customLivenessProbe: {}
-
+customLivenessProbe: |
+  exec:
+    command:
+    - bash
+    - -ec
+    - mongo $MONGODB_CLIENT_EXTRA_FLAGS --eval "db.adminCommand('ping')"
+  initialDelaySeconds: 30
+  periodSeconds: 10
+  timeoutSeconds: 5
+  failureThreshold: 6
+  successThreshold: 1
 ## Custom Rediness probes MongoDB pods
 ##
-customReadinessProbe: {}
+customReadinessProbe: |
+  exec:
+    command:
+    - bash
+    - -ec
+    - mongo $MONGODB_CLIENT_EXTRA_FLAGS --eval "db.adminCommand('ping')"
+  initialDelaySeconds: 5
+  periodSeconds: 10
+  timeoutSeconds: 5
+  failureThreshold: 6
+  successThreshold: 1

 ## Add init containers to the MongoDB pods.
 ## Example:
seguidor777 commented 4 years ago

Thanks,

I think the SSL connection error is being produced somewhere else; the liveness/readiness probes are now passing, but the error still persists in the pods:

2020-09-01T23:39:35.955+0000 I NETWORK [conn369] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 10.244.1.28:43097 (connection id: 369)

dani8art commented 4 years ago

Hi @seguidor777, is this last error persistent over time, or did it just appear once? It is possible that during the upgrade you see some errors because not all the nodes have been set up properly yet, but in the end the cluster should arrive at a stable state, as mentioned above:

Typically when you perform an upgrade of an existing deployment, the secondaries get upgrade first then the primary last.

could you confirm this?

seguidor777 commented 4 years ago

The error keeps showing up forever. I think it's related to some kind of inter-node communication. I have set up 2 replicas (mongodb-0, mongodb-1) and they have addresses 10.244.1.16 and 10.244.2.18 respectively.

I don't know where the addresses in the logs come from; the only client that is connecting successfully is the one that connects to the mongodb-0 replica from localhost using the server certificate.

Logs from mongodb-0:

2020-09-02T16:16:51.746+0000 I NETWORK [listener] connection accepted from 10.244.1.17:35139 #3036 (7 connections now open)
2020-09-02T16:16:51.748+0000 I NETWORK [conn3036] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 10.244.1.17:35139 (connection id: 3036)
2020-09-02T16:16:51.748+0000 I NETWORK [conn3036] end connection 10.244.1.17:35139 (6 connections now open)
2020-09-02T16:16:51.802+0000 I NETWORK [listener] connection accepted from 10.244.2.19:38443 #3037 (7 connections now open)
2020-09-02T16:16:51.823+0000 I NETWORK [conn3038] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 10.244.2.19:39589 (connection id: 3038)
2020-09-02T16:16:51.823+0000 I NETWORK [conn3038] end connection 10.244.2.19:39589 (6 connections now open)
2020-09-02T16:16:51.888+0000 I NETWORK [listener] connection accepted from 127.0.0.1:35408 #3039 (7 connections now open)
2020-09-02T16:16:51.894+0000 W NETWORK [conn3039] Client connecting with server's own TLS certificate
2020-09-02T16:16:51.894+0000 I NETWORK [conn3039] received client metadata from 127.0.0.1:35408 conn3039: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.8" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.5.0-kali2-amd64" } }
2020-09-02T16:16:51.898+0000 I NETWORK [conn3039] end connection 127.0.0.1:35408 (6 connections now open)

Logs from mongodb-1:

2020-09-02T16:17:03.796+0000 I NETWORK [listener] connection accepted from 10.244.1.17:47627 #3136 (3 connections now open)
2020-09-02T16:17:03.796+0000 I NETWORK [conn3136] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 10.244.1.17:47627 (connection id: 3136)
2020-09-02T16:17:03.797+0000 I NETWORK [conn3136] end connection 10.244.1.17:47627 (2 connections now open)
2020-09-02T16:17:03.900+0000 I NETWORK [listener] connection accepted from 10.244.2.19:59455 #3138 (3 connections now open)
2020-09-02T16:17:04.041+0000 I NETWORK [conn3139] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 10.244.2.19:38177 (connection id: 3139)
2020-09-02T16:17:04.041+0000 I NETWORK [conn3139] end connection 10.244.2.19:38177 (2 connections now open)
2020-09-02T16:17:04.247+0000 I NETWORK [listener] connection accepted from 10.244.2.20:45237 #3140 (3 connections now open)
2020-09-02T16:17:04.248+0000 I NETWORK [conn3140] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 10.244.2.20:45237 (connection id: 3140)
2020-09-02T16:17:04.248+0000 I NETWORK [conn3140] end connection 10.244.2.20:45237 (2 connections now open)
2020-09-02T16:17:04.290+0000 I NETWORK [listener] connection accepted from 10.244.2.20:41425 #3141 (3 connections now open)
2020-09-02T16:17:04.577+0000 W NETWORK [conn3143] Client connecting with server's own TLS certificate
2020-09-02T16:17:04.577+0000 I NETWORK [conn3143] received client metadata from 127.0.0.1:35680 conn3143: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.8" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.5.0-kali2-amd64" } }

franklin432 commented 4 years ago

I have yet to apply the diff that @dani8art mentioned above; I will in a bit. I did notice I have to helm install a default mongo deployment first, then helm upgrade with the updated settings of my choice. If you simply helm install with your changes, the pod will run but show a restart count of 1 and complain about:

2020-09-02T16:40:42.048+0000 I  ACCESS   [conn35] SASL SCRAM-SHA-1 authentication failed for root on admin from client 100.96.2.9:45960 ; UserNotFound: Could not find user "root" for db "admin"
2020-09-02T16:40:42.048+0000 I  NETWORK  [conn35] end connection 100.96.2.9:45960 (0 connections now open)

For now it seems to be working after the base install and then a helm upgrade, with both livenessProbe and readinessProbe simply set to false in values-production.yaml. I did not use any customReadinessProbe or customLivenessProbe parameters either. Then I have the following set as well for SSL/TLS:

extraEnvVars:
  - name: MONGODB_EXTRA_FLAGS
    value: --tlsMode=requireTLS --tlsCertificateKeyFile=/pemdir/server.pem --tlsCAFile=/pemdir/ca.crt --tlsClusterFile=/pemdir/server.pem
  - name: MONGODB_CLIENT_EXTRA_FLAGS
    value: --tls --tlsCertificateKeyFile=/pemdir/server.pem --tlsCAFile=/pemdir/ca.crt

If I attempt to log in without the TLS settings, with mongo -u root -p, the logs read:

2020-09-02T16:32:19.292+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:47636 #11 (7 connections now open)
2020-09-02T16:32:19.292+0000 I  NETWORK  [conn11] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 127.0.0.1:47636 (connection id: 11)
2020-09-02T16:32:19.293+0000 I  NETWORK  [conn11] end connection 127.0.0.1:47636 (6 connections now open)

Then when I log in with the correct TLS settings, mongo -u root -p --tls --tlsCertificateKeyFile=/pemdir/server.pem --tlsCAFile=/pemdir/ca.crt --host $MONGODB_ADVERTISED_HOSTNAME, the logs read:

2020-09-02T16:34:24.572+0000 I  NETWORK  [listener] connection accepted from 100.96.1.7:49208 #12 (7 connections now open)
2020-09-02T16:34:24.577+0000 W  NETWORK  [conn12] Client connecting with server's own TLS certificate
2020-09-02T16:34:24.577+0000 I  NETWORK  [conn12] received client metadata from 100.96.1.7:49208 conn12: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.9" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 4.9.0-13-amd64" } }
2020-09-02T16:34:24.605+0000 I  ACCESS   [conn12] Successfully authenticated as principal root on admin from client 100.96.1.7:49208
2020-09-02T16:34:26.270+0000 I  NETWORK  [conn12] end connection 100.96.1.7:49208 (6 connections now open)

FYI, I'm using 4.2.9-debian-10-r5 and created wildcard certs (CN=*.test-mongodb-headless.default.svc.cluster.local) for my server.pem. My ca.crt and server.pem were created beforehand, and I made a secret out of them named mongo-ssl. A volume was created from that secret, and a volumeMount named pemdir contains the certs.

volumes:
  - name: mongo-ssl-volume
    secret:
      secretName: mongo-ssl
      defaultMode: 256
volumeMounts:
  - name: mongo-ssl-volume
    readOnly: true
    mountPath: /pemdir
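For reference, the wildcard cert and the mongo-ssl secret described above could be created roughly like this (a sketch reusing the names from this comment; adjust subjects and paths to your environment):

# Self-signed CA
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 3650 -out ca.crt -subj "/CN=Certificate Authority"

# Wildcard server cert covering all replica set members behind the headless service
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=*.test-mongodb-headless.default.svc.cluster.local"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 3650

# mongod expects the cert and key concatenated into one PEM file
cat server.crt server.key > server.pem

# Secret consumed by the mongo-ssl-volume above
kubectl create secret generic mongo-ssl --from-file=ca.crt=ca.crt --from-file=server.pem=server.pem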
seguidor777 commented 4 years ago

So the error we are dealing with occurs when a client connects without the TLS settings. In my case, it seems that the variable $MONGODB_CLIENT_EXTRA_FLAGS is not being applied anywhere.

franklin432 commented 4 years ago

@seguidor777 Hmm, how and where are you setting the MongoDB client extra flags? If you have tlsMode or sslMode set to require and a client connects without the TLS settings, then you SHOULD indeed see 2020-09-02T16:16:51.748+0000 I NETWORK [conn3036] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 10.244.1.17:35139 (connection id: 3036), because the server has been configured to only allow TLS and your client's connection to the server fails because it lacks it. However, if the client connects with the correct TLS settings, then you would of course see Client connecting with server's own TLS certificate, received client metadata from 100.x.x.x, and Successfully authenticated as principal root on admin from client.

seguidor777 commented 4 years ago

You are right @franklin432, the error is present when the server requires TLS but the client doesn't present it.

I added the TLS settings in the extraEnvVars in my values.yml:

extraEnvVars:
  - name: MONGODB_EXTRA_FLAGS
    value: --tlsMode=requireTLS --tlsCertificateKeyFile=/certificates/mongodb-combined.pem --tlsClusterFile=/certificates/mongodb-combined.pem --tlsCAFile=/etc/certs/ca-cert/ca.pem
  - name: MONGODB_CLIENT_EXTRA_FLAGS
    value: --tls --tlsCertificateKeyFile=/certificates/mongodb-combined.pem --tlsCAFile=/etc/certs/ca-cert/ca.pem

And the certificates are mounted with

extraVolumes:
  - name: "mongodb-certs"
    secret:
      secretName: "mongodb-certs"
      defaultMode: 0400
  - name: "ca-cert"
    secret:
      secretName: "ca-cert"
      defaultMode: 0400
extraVolumeMounts:
  - name: "mongodb-certs"
    mountPath: "/certificates"
    readOnly: true
  - name: "ca-cert"
    mountPath: "/etc/certs/ca-cert"
    readOnly: true
franklin432 commented 4 years ago

@seguidor777 Cool, and is this way working for you, or are you still getting the same error? Also, what mongo version are you using?

seguidor777 commented 4 years ago

I get it working only when I change to --tlsMode=allowTLS, but I am trying to make it work with the requireTLS value. The mongo version is 4.2.8.

pixie79 commented 4 years ago

I have a similar setup: I created my certs using SNI and cert-manager, then combined them in an init pod.

I have managed to get it working with requireTLS and health checks on, but I have found an issue when metrics are enabled. When using custom health checks you do need to set the standard ones to disabled (otherwise the custom versions do not override the originals); see the sketch below.
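A minimal values sketch of that combination (disabling the built-in probes so the custom one takes effect; the probe command is the one suggested earlier in this thread):

livenessProbe:
  enabled: false
readinessProbe:
  enabled: false
customReadinessProbe:
  exec:
    command:
      - bash
      - -ec
      - mongo $MONGODB_CLIENT_EXTRA_FLAGS --eval "db.adminCommand('ping')"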

As for the metrics pods, it appears the command there needs an override as well so it uses TLS to connect to the cluster. Doing a get on the replica set YAML, I can see the args used to start the monitoring agent do not specify TLS, hence the error. Is there an env variable to override this? I do see there is an extraFlags: "" setting in the metrics section, but I am not sure what would be correct.

Current command from the deployed yaml for the metrics pod

dani8art commented 4 years ago

Hi,

Yes, it seems the errors could be caused by the metrics sidecar; do you have it enabled, @seguidor777?

In order to solve it, @pixie79, you should use the following and add the proper flags according to these docs:

metrics:
  extraFlags: <tls_flags>
pixie79 commented 4 years ago

Argh, it looks like flags might no longer work and instead it wants to use options?

Is there a way to set the options instead?

kubectl logs mongodb-2 metrics -f mongodb_exporter: error: unknown long flag '--mongodb.tls', try --help

kubectl logs mongodb-2 metrics -f mongodb_exporter: error: unknown short flag '-m', try --help

^ was with > extraFlags: "--mongodb.tls --mongodb.tls-ca=/certificates/ca.crt --mongodb.tls-cert=/certificates/mongo.pem"

seguidor777 commented 4 years ago

@dani8art, I don't have the metrics enabled

@pixie79, could you please share your configuration so I can see how you got the requireTLS option to work? Even though it is working, is no SSLHandshakeFailed error shown in the pods?

dani8art commented 4 years ago

Hi @pixie79 it seems it was recently introduced please check https://github.com/bitnami/charts/pull/3590. With that, you could use https://docs.mongodb.com/manual/reference/connection-string/#tls-options to enable it.

pixie79 commented 4 years ago

These are my current settings:

Certificate request - via cert-manager https://gist.github.com/pixie79/cffae80bc9b0fee43d3f10e495995955

values.yaml https://gist.github.com/pixie79/815fda6c1ecf29cc39b9a6358f691637

dani8art commented 4 years ago

you should modify it using:

--set metrics.enabled=true
--set metrics.extraUri=?ssl=true&tlsCertificateKeyFile=<your_tls>&tlsCAFile=<your_ca>

and removing this extra https://gist.github.com/pixie79/815fda6c1ecf29cc39b9a6358f691637#file-mongo-value-yaml-L858
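In values form (a sketch only; substitute your own cert paths for the placeholders, as in the --set example above):

metrics:
  enabled: true
  extraUri: "?ssl=true&tlsCertificateKeyFile=<your_tls>&tlsCAFile=<your_ca>"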

seguidor777 commented 4 years ago

I made it work with the same dnsNames as @pixie79

Could anyone please let me know what domains should be covered by the certificate? Let's suppose that I only have 2 replicas.

binboum commented 4 years ago

Here is a values file that will allow you to do TLS:

Later I would make the modifications properly in the templates.

It's based on research into these issues, and a little bit of work.

secret-ca.yaml

{{- $cn := printf "%s-headless.%s.svc.cluster.local" ( include "mystack.mongodb.fullname" . ) .Release.Namespace }}
{{- $ca := genCA "mystack-ca" 3650 -}}
{{- $cert := genSignedCert $cn nil nil 3650 $ca -}}
{{- $pem := printf "%s%s" $cert.Cert $cert.Key -}}
apiVersion: v1
kind: Secret
metadata:
  name: ca-clients
  annotations:
    "helm.sh/hook": "pre-install"
  labels:
    {{- include "mystack.labels" . | nindent 4 }}
type: Opaque
data:
  mongodb-ca-cert: {{ b64enc $ca.Cert }}
  mongodb-ca-key: {{ b64enc $ca.Key }}
  client-pem: {{ b64enc $pem }}

values.yaml

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
#   storageClass: myStorageClass
## Override the namespace for resource deployed by the chart, but can itself be overridden by the local namespaceOverride
#   namespaceOverride: my-global-namespace

image:
  ## Bitnami MongoDB registry
  ##
  registry: docker.io
  ## Bitnami MongoDB image name
  ##
  repository: bitnami/mongodb
  ## Bitnami MongoDB image tag
  ## ref: https://hub.docker.com/r/bitnami/mongodb/tags/
  ##
  tag: 4.2.8-debian-10-r31
  ## Specify a imagePullPolicy
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

  ## Set to true if you would like to see extra information on logs
  ## It turns on Bitnami debugging in minideb-extras-base
  ## ref:  https://github.com/bitnami/minideb-extras-base
  debug: false

## String to partially override mongodb.fullname template (will maintain the release name)
##
# nameOverride:

## String to fully override mongodb.fullname template
##
# fullnameOverride:

## Kubernetes Cluster Domain
##
clusterDomain: cluster.local

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## MongoDB architecture. Allowed values: standalone or replicaset
##
architecture: replicaset

## Use StatefulSet instead of Deployment when deploying standalone
##
useStatefulSet: false

## MongoDB Authentication parameters
##
auth:
  ## bug : https://github.com/bitnami/charts/pull/3544
  ## Enable authentication
  ## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/
  ##
  enabled: true
  ## MongoDB root password
  ## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run
  ##
  rootPassword: ""
  ## MongoDB custom user and database
  ## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#creating-a-user-and-database-on-first-run
  ##
  username: test
  password: ""
  database: test
  ## Key used for replica set authentication
  ## Ignored when mongodb.architecture=standalone
  ##
  replicaSetKey: ""

  ## Existing secret with MongoDB credentials
  ## NOTE: When it's set the previous parameters are ignored.
  ##
  existingSecret: credentials

## Name of the replica set
## Ignored when mongodb.architecture=standalone
##
replicaSetName: rs0

## Enable DNS hostnames in the replica set config
## Ignored when mongodb.architecture=standalone
## Ignored when externalAccess.enabled=true
##
replicaSetHostnames: true

## Whether enable/disable IPv6 on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-ipv6
##
enableIPv6: false

## Whether enable/disable DirectoryPerDB on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-directoryperdb
##
directoryPerDB: false

## MongoDB System Log configuration
## ref: https://github.com/bitnami/bitnami-docker-mongodb#configuring-system-log-verbosity-level
##
systemLogVerbosity: 0
disableSystemLog: false

## MongoDB configuration file for Primary and Secondary nodes. For documentation of all options, see:
##   http://docs.mongodb.org/manual/reference/configuration-options/
## Example:
## configuration:
##   # where and how to store data.
##   storage:
##     dbPath: /bitnami/mongodb/data/db
##     journal:
##       enabled: true
##     directoryPerDB: false
##   # where to write logging data
##   systemLog:
##     destination: file
##     quiet: false
##     logAppend: true
##     logRotate: reopen
##     path: /opt/bitnami/mongodb/logs/mongodb.log
##     verbosity: 0
##   # network interfaces
##   net:
##     port: 27017
##     unixDomainSocket:
##       enabled: true
##       pathPrefix: /opt/bitnami/mongodb/tmp
##     ipv6: false
##     bindIpAll: true
##   # replica set options
##   #replication:
##     #replSetName: replicaset
##     #enableMajorityReadConcern: true
##   # process management options
##   processManagement:
##      fork: false
##      pidFilePath: /opt/bitnami/mongodb/tmp/mongodb.pid
##   # set parameter options
##   setParameter:
##      enableLocalhostAuthBypass: true
##   # security options
##   security:
##     authorization: disabled
##     #keyFile: /opt/bitnami/mongodb/conf/keyfile
##
configuration: ""

## ConfigMap with MongoDB configuration for Primary and Secondary nodes
## NOTE: When it's set the arbiter.configuration parameter is ignored
##
# existingConfigmap:

## initdb scripts
## Specify dictionary of scripts to be run at first boot
## Example:
## initdbScripts:
##   my_init_script.sh: |
##      #!/bin/bash
##      echo "Do something."
initdbScripts: {}

## Existing ConfigMap with custom init scripts
##
# initdbScriptsConfigMap:

## Command and args for running the container (set to default if not set). Use array form
##
# command:
# args:

## Additional command line flags
## Example:
## extraFlags:
##  - "--wiredTigerCacheSizeGB=2"
##
extraFlags:
  - --wiredTigerCacheSizeGB=1
  - --tlsMode=requireTLS
  - --tlsCAFile=/certs/mongodb-ca-cert
  - --tlsCertificateKeyFile=/certs/mongodb.pem

## Additional environment variables to set
## E.g:
## extraEnvVars:
##   - name: FOO
##     value: BAR
##
extraEnvVars:
  - name: MONGODB_CLIENT_EXTRA_FLAGS
    value: --tls --tlsCertificateKeyFile=/certs/mongodb.pem --tlsCAFile=/certs/mongodb-ca-cert

## ConfigMap with extra environment variables
##
# extraEnvVarsCM:

## Secret with extra environment variables
##
# extraEnvVarsSecret:

## Annotations to be added to the MongoDB statefulset. Evaluated as a template.
##
annotations: {}

## Additional labels to be added to the MongoDB statefulset. Evaluated as a template.
##
labels: {}

## Number of MongoDB replicas to deploy.
## Ignored when mongodb.architecture=standalone
##
replicaCount: 3

## StrategyType for MongoDB statefulset
## It can be set to RollingUpdate or Recreate by default.
##
strategyType: RollingUpdate

## MongoDB should be initialized one by one when building the replicaset for the first time.
##
podManagementPolicy: OrderedReady

## Affinity for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - podAffinityTerm:
        labelSelector:
          matchLabels:
            stack: test
        topologyKey: failure-domain.beta.kubernetes.io/zone
      weight: 90
    - podAffinityTerm:
        labelSelector:
          matchLabels:
            stack: test
        topologyKey: kubernetes.io/hostname
      weight: 100

## Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Lables for MongoDB pods. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels:
  stack: test

## Annotations for MongoDB pods. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations:
  sidecar.istio.io/inject: "false"

## MongoDB pods' priority.
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
# priorityClassName: ""

## MongoDB pods' Security Context.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
podSecurityContext:
  enabled: true
  fsGroup: 1001
  ## sysctl settings
  ## Example:
  ## sysctls:
  ## - name: net.core.somaxconn
  ##   value: "10000"
  ##
  sysctls: {}

## MongoDB containers' Security Context (only main container).
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
containerSecurityContext:
  enabled: true
  runAsUser: 1001

## MongoDB containers' resource requests and limits.
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    cpu: 300m
    memory: 2048Mi
  requests:
    cpu: 100m
    memory: 1536Mi

## MongoDB pods' liveness and readiness probes. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  enabled: false
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1

## Custom Liveness probes for MongoDB pods
##
customLivenessProbe: {}

## Custom Rediness probes MongoDB pods
##
customReadinessProbe:
  exec:
    command:
      - mongo 
      - --tls 
      - --tlsCertificateKeyFile=/certs/mongodb.pem 
      - --tlsCAFile=/certs/mongodb-ca-cert
      - --eval
      - "db.adminCommand('ping')"
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 6

## Add init containers to the MongoDB pods.
## Example:
## initContainers:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
initContainers:
  - name: generate-client
    image: nginx:1.19.1
    imagePullPolicy: "Always"
    env:
      - name: MY_POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
    volumeMounts:
    - name: certs-volume
      mountPath: /certs/CAs
    - name: certs
      mountPath: /certs
    command: 
      - sh
      - "-c"
      - |
        /bin/bash <<'EOF'

        my_hostname=$(hostname)
        svc=$(echo -n "$my_hostname" | sed s/-[0-9]*$//)-headless

        cp /certs/CAs/* /certs/

        cat >/certs/openssl.cnf <<EOL
        [req]
        req_extensions = v3_req
        distinguished_name = req_distinguished_name
        [req_distinguished_name]
        [ v3_req ]
        basicConstraints = CA:FALSE
        keyUsage = nonRepudiation, digitalSignature, keyEncipherment
        subjectAltName = @alt_names
        [alt_names]
        DNS.1 = $svc
        DNS.2 = $my_hostname
        DNS.3 = $my_hostname.$svc.$MY_POD_NAMESPACE.svc.cluster.local
        DNS.4 = localhost
        DNS.5 = 127.0.0.1
        EOL

        export RANDFILE=/certs/.rnd && openssl genrsa -out /certs/mongo.key 2048

        #Create the client/server cert
        openssl req -new -key /certs/mongo.key -out /certs/mongo.csr -subj "/C=US/O=My Organisations/OU=IT/CN=$my_hostname" -config /certs/openssl.cnf

        #Signing the server cert with the CA cert and key
        openssl x509 -req -in /certs/mongo.csr -CA /certs/mongodb-ca-cert -CAkey /certs/mongodb-ca-key -CAcreateserial -out /certs/mongo.crt -days 3650 -extensions v3_req -extfile /certs/openssl.cnf

        rm /certs/mongo.csr

        #Concatenate to a pem file for use as the client PEM file which can be used for both member and client authentication.
        cat /certs/mongo.crt /certs/mongo.key > /certs/mongodb.pem

        cd /certs/
        shopt -s extglob
        rm -rf !(mongodb-ca-cert|mongodb.pem|CAs)

        EOF

## Add sidecars to the MongoDB pods.
## Example:
## sidecars:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
sidecars: {}

## extraVolumes and extraVolumeMounts allows you to mount other volumes on MongoDB pods
## Examples:
## extraVolumeMounts:
##   - name: extras
##     mountPath: /usr/share/extras
##     readOnly: true
## extraVolumes:
##   - name: extras
##     emptyDir: {}
extraVolumeMounts:
  - name: certs
    mountPath: /certs
extraVolumes:
  - name: certs
    emptyDir: {}
  - name: certs-volume
    secret:
      secretName: ca-clients
      items:
      - key: mongodb-ca-cert
        path: mongodb-ca-cert
        mode: 511
      - key: mongodb-ca-key
        path: mongodb-ca-key
        mode: 511

## MongoDB Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
  create: true
  ## Min number of pods that must still be available after the eviction
  ##
  minAvailable: 1
  ## Max number of pods that can be unavailable after the eviction
  ##
  # maxUnavailable: 1

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  ## Ignored when mongodb.architecture=replicaset
  ##
  # existingClaim:
  ## PV Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner.
  ##
  storageClass: ""
  ## PV Access Mode
  ##
  accessModes:
    - ReadWriteOnce
  ## PVC size
  ##
  size: 50Gi
  ## PVC annotations
  ##
  annotations: {}
  ## The path the volume will be mounted at, useful when using different
  ## MongoDB images.
  ##
  mountPath: /bitnami/mongodb
  ## The subdirectory of the volume to mount to, useful in dev environments
  ## and one PV for multiple services.
  ##
  subPath: ""

## Service parameters
##
service:
  ## Service type
  ##
  type: ClusterIP
  ## MongoDB service port
  ##
  port: 27017
  ## MongoDB service port name
  ##
  portName: mongodb
  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  nodePort: ""
  ## MongoDB service clusterIP IP
  ##
  # clusterIP: None
  ## Specify the externalIP value ClusterIP service type.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
  ##
  externalIPs: []
  ## Specify the loadBalancerIP value for LoadBalancer service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
  ##
  # loadBalancerIP:
  ## Specify the loadBalancerSourceRanges value for LoadBalancer service types.
  ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  loadBalancerSourceRanges: []
  ## Provide any additional annotations which may be required. Evaluated as a template
  ##
  annotations: {}

## External Access to MongoDB nodes configuration
##
externalAccess:
  ## Enable Kubernetes external cluster access to MongoDB nodes
  ##
  enabled: false
  ## External IPs auto-discovery configuration
  ## An init container is used to auto-detect LB IPs or node ports by querying the K8s API
  ## Note: RBAC might be required
  ##
  autoDiscovery:
    ## Enable external IP/ports auto-discovery
    ##
    enabled: false
    ## Bitnami Kubectl image
    ## ref: https://hub.docker.com/r/bitnami/kubectl/tags/
    ##
    image:
      registry: docker.io
      repository: bitnami/kubectl
      tag: 1.18.5-debian-10-r14
      ## Specify an imagePullPolicy
      ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
      ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
      ##
      pullPolicy: IfNotPresent
      ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
      ## Example:
      ## pullSecrets:
      ##   - myRegistryKeySecretName
      ##
      pullSecrets: []
    ## Init Container resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      # We usually recommend not to specify default resources and to leave this as a conscious
      # choice for the user. This also increases chances charts run on environments with little
      # resources, such as Minikube. If you do want to specify resources, uncomment the following
      # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
      limits: {}
      #   cpu: 100m
      #   memory: 128Mi
      requests: {}
      #   cpu: 100m
      #   memory: 128Mi
  ## Parameters to configure K8s service(s) used to externally access MongoDB nodes
  ## A new service per replica will be created
  ##
  service:
    ## Service type. Allowed values: LoadBalancer or NodePort
    ##
    type: LoadBalancer
    ## Port used when service type is LoadBalancer
    ##
    port: 27017
    ## Array of load balancer IPs for each MongoDB node. Length must be the same as replicaCount
    ## Example:
    ## loadBalancerIPs:
    ##   - X.X.X.X
    ##   - Y.Y.Y.Y
    ##
    loadBalancerIPs: []
    ## Load Balancer sources
    ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ## Example:
    ## loadBalancerSourceRanges:
    ## - 10.10.10.0/24
    ##
    loadBalancerSourceRanges: []
    ## Array of node ports used for each MongoDB node. Length must be the same as replicaCount
    ## Example:
    ## nodePorts:
    ##   - 30001
    ##   - 30002
    ##
    nodePorts: []
    ## When service type is NodePort, you can specify the domain used for the MongoDB advertised hostnames.
    ## If not specified, the container will try to use the Kubernetes node's external IP
    ##
    # domain: mydomain.com
    ## Provide any additional annotations which may be required. Evaluated as a template
    ##
    annotations: {}

##
## MongoDB Arbiter parameters.
##
arbiter:
  enabled: false
  ## MongoDB configuration file for the Arbiter. For documentation of all options, see:
  ##   http://docs.mongodb.org/manual/reference/configuration-options/
  ##
  configuration: ""

  ## ConfigMap with MongoDB configuration for the Arbiter
  ## NOTE: When it's set the arbiter.configuration parameter is ignored
  ##
  # existingConfigmap:

  ## Command and args for running the container (set to default if not set). Use array form
  ##
  # command:
  # args:

  ## Additional command line flags
  ## Example:
  ## extraFlags:
  ##  - "--wiredTigerCacheSizeGB=2"
  ##
  extraFlags:
    - --wiredTigerCacheSizeGB=0.5
    - --tlsMode=requireTLS
    - --tlsCAFile=/certs/mongodb-ca-cert
    - --tlsCertificateKeyFile=/certs/mongodb.pem

  ## Additional environment variables to set
  ## E.g:
  ## extraEnvVars:
  ##   - name: FOO
  ##     value: BAR
  ##
  extraEnvVars:
    - name: MONGODB_CLIENT_EXTRA_FLAGS
      value: --tls --tlsCertificateKeyFile=/certs/mongodb.pem --tlsCAFile=/certs/mongodb-ca-cert

  ## ConfigMap with extra environment variables
  ##
  # extraEnvVarsCM:

  ## Secret with extra environment variables
  ##
  # extraEnvVarsSecret:

  ## Annotations to be added to the Arbiter statefulset. Evaluated as a template.
  ##
  annotations: {}

  ## Additional labels to be added to the Arbiter statefulset. Evaluated as a template.
  ##
  labels: {}

  ## Affinity for pod assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              stack: test
          topologyKey: failure-domain.beta.kubernetes.io/zone
        weight: 90
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              stack: test
          topologyKey: kubernetes.io/hostname
        weight: 100

  ## Node labels for pod assignment. Evaluated as a template.
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}

  ## Tolerations for pod assignment. Evaluated as a template.
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []

  ## Labels for MongoDB Arbiter pods. Evaluated as a template.
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  ##
  podLabels:
    stack: test

  ## Annotations for MongoDB Arbiter pods. Evaluated as a template.
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations:
    sidecar.istio.io/inject: "false"

  ## MongoDB Arbiter pods' priority.
  ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  ##
  # priorityClassName: ""

  ## MongoDB Arbiter pods' Security Context.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
  ##
  podSecurityContext:
    enabled: true
    fsGroup: 1001
    ## sysctl settings
    ## Example:
    ## sysctls:
    ## - name: net.core.somaxconn
    ##   value: "10000"
    ##
    sysctls: {}

  ## MongoDB Arbiter containers' Security Context (only main container).
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
  ##
  containerSecurityContext:
    enabled: true
    runAsUser: 1001

  ## MongoDB Arbiter containers' resource requests and limits.
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    limits:
      cpu: 100m
      memory: 1024Mi
    requests:
      cpu: 50m
      memory: 512Mi

  ## MongoDB Arbiter pods' liveness and readiness probes. Evaluated as a template.
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1
  readinessProbe:
    enabled: false
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

  ## Custom Liveness probe for MongoDB Arbiter pods
  ##
  customLivenessProbe: {}

  ## Custom Readiness probe for MongoDB Arbiter pods
  ##
  customReadinessProbe:
    exec:
      command:
        - mongo 
        - --tls 
        - --tlsCertificateKeyFile=/certs/mongodb.pem 
        - --tlsCAFile=/certs/mongodb-ca-cert
        - --eval
        - "db.adminCommand('ping')"
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 6

  ## Add init containers to the MongoDB Arbiter pods.
  ## Example:
  ## initContainers:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
  ##         containerPort: 1234
  ##
  initContainers:
    - name: generate-client
      image: nginx:1.19.1
      imagePullPolicy: "Always"
      env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumeMounts:
      - name: certs-volume
        mountPath: /certs/CAs
      - name: certs
        mountPath: /certs
      command: 
        - sh
        - "-c"
        - |
          /bin/bash <<'EOF'

          my_hostname=$(hostname)
          svc=$(echo -n "$my_hostname" | sed s/-[0-9]*$//)-headless

          cp /certs/CAs/* /certs/

          cat >/certs/openssl.cnf <<EOL
          [req]
          req_extensions = v3_req
          distinguished_name = req_distinguished_name
          [req_distinguished_name]
          [ v3_req ]
          basicConstraints = CA:FALSE
          keyUsage = nonRepudiation, digitalSignature, keyEncipherment
          subjectAltName = @alt_names
          [alt_names]
          DNS.1 = $svc
          DNS.2 = $my_hostname
          DNS.3 = $my_hostname.$svc.$MY_POD_NAMESPACE.svc.cluster.local
          DNS.4 = localhost
          DNS.5 = 127.0.0.1
          EOL

          export RANDFILE=/certs/.rnd && openssl genrsa -out /certs/mongo.key 2048

          #Create the client/server cert
          openssl req -new -key /certs/mongo.key -out /certs/mongo.csr -subj "/C=US/O=My Organisations/OU=IT/CN=$my_hostname" -config /certs/openssl.cnf

          #Signing the server cert with the CA cert and key
          openssl x509 -req -in /certs/mongo.csr -CA /certs/mongodb-ca-cert -CAkey /certs/mongodb-ca-key -CAcreateserial -out /certs/mongo.crt -days 3650 -extensions v3_req -extfile /certs/openssl.cnf

          rm /certs/mongo.csr

          #Concatenate to a pem file for use as the client PEM file which can be used for both member and client authentication.
          cat /certs/mongo.crt /certs/mongo.key > /certs/mongodb.pem

          cd /certs/
          shopt -s extglob
          rm -rf !(mongodb-ca-cert|mongodb.pem|CAs)

          EOF

  ## Add sidecars to the MongoDB Arbiter pods.
  ## Example:
  ## sidecars:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
  ##         containerPort: 1234
  ##
  sidecars: {}

  ## extraVolumes and extraVolumeMounts allow you to mount other volumes on MongoDB Arbiter pods
  ## Examples:
  ## extraVolumeMounts:
  ##   - name: extras
  ##     mountPath: /usr/share/extras
  ##     readOnly: true
  ## extraVolumes:
  ##   - name: extras
  ##     emptyDir: {}
  extraVolumeMounts:
    - name: certs
      mountPath: /certs
  extraVolumes:
    - name: certs
      emptyDir: {}
    - name: certs-volume
      secret:
        secretName: ca-clients
        items:
        - key: mongodb-ca-cert
          path: mongodb-ca-cert
          mode: 511
        - key: mongodb-ca-key
          path: mongodb-ca-key
          mode: 511

  ## MongoDB Arbiter Pod Disruption Budget configuration
  ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
  ##
  pdb:
    create: false
    ## Min number of pods that must still be available after the eviction
    ##
    minAvailable: 1
    ## Max number of pods that can be unavailable after the eviction
    ##
    # maxUnavailable: 1

## ServiceAccount
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the mongodb.fullname template
  ##
  # name:

## Role Based Access
## ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
  ## Specifies whether RBAC rules should be created,
  ## binding the MongoDB ServiceAccount to a role
  ## that allows MongoDB pods to query the K8s API
  ##
  create: false

## Init Container parameters
## Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each component
## values from the securityContext section of the component
##
volumePermissions:
  enabled: false
  ## Bitnami Minideb image
  ## ref: https://hub.docker.com/r/bitnami/minideb/tags/
  ##
  image:
    registry: docker.io
    repository: bitnami/minideb
    tag: buster
    ## Specify an imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    pullPolicy: Always
    ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## Example:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## Init Container resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 100m
    #   memory: 128Mi
    requests: {}
    #   cpu: 100m
    #   memory: 128Mi

## Prometheus Exporter / Metrics
##
metrics:
  enabled: false
  ## Bitnami MongoDB Prometheus Exporter image
  ## ref: https://hub.docker.com/r/bitnami/mongodb-exporter/tags/
  ##
  image:
    registry: docker.io
    repository: bitnami/mongodb-exporter
    tag: 0.11.0-debian-10-r80
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName

  ## String with extra flags to the metrics exporter
  ## ref: https://github.com/percona/mongodb_exporter/blob/master/mongodb_exporter.go
  ##
  extraFlags: ""

  ## Metrics exporter container resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 100m
    #   memory: 128Mi
    requests: {}
    #   cpu: 100m
    #   memory: 128Mi

  ## Prometheus Exporter service configuration
  ##
  service:
    ## Annotations for Prometheus Exporter pods. Evaluated as a template.
    ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
    ##
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "{{ .Values.metrics.service.port }}"
      prometheus.io/path: "/metrics"
    type: ClusterIP
    port: 9216

  ## Metrics exporter liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 15
    periodSeconds: 5
    timeoutSeconds: 5
    failureThreshold: 3
    successThreshold: 1
  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1

  ## Prometheus Service Monitor
  ## ref: https://github.com/coreos/prometheus-operator
  ##      https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md
  ##
  serviceMonitor:
    ## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
    enabled: false

    ## Specify the namespace where Prometheus Operator is running
    ##
    # namespace: monitoring

    ## Specify the interval at which metrics should be scraped
    ##
    interval: 30s
    ## Specify the timeout after which the scrape is ended
    ##
    # scrapeTimeout: 30s
    ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    ##
    additionalLabels: {}

  ## Custom PrometheusRule to be defined
  ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
  ##
  prometheusRule:
    enabled: false
    additionalLabels: {}
    ## Specify the namespace where Prometheus Operator is running
    ##
    # namespace: monitoring
    ## Define individual alerting rules as required
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#rulegroup
    ##      https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
    ##
    rules: {}
mnmami commented 4 years ago

@binboum would you also add some docs regarding this? It'd be a great addition to this much-loved chart. Looking forward to the updated templates.

dani8art commented 4 years ago

Hi @binboum thank you so much for this contribution, we'll be happy to see and handle it as a PR.

sanguis commented 4 years ago

@dani8art I really need this and would be happy to format it as a PR if @binboum does not get to it, but I don't want to take the green-dot credit.

I'll give him 24 hours, then I will submit it.

binboum commented 4 years ago

Hi, if you have the time you can use the code and turn it into a template.

I have no problem with that; the code works as is.
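
For anyone trying it out, a quick way to verify from inside a data pod is to reuse the client flags already set via MONGODB_CLIENT_EXTRA_FLAGS in the values above (the release and pod names below are examples, not something the chart guarantees):

# Illustrative verification only; adjust the pod name to your release.
kubectl exec my-release-mongodb-0 -- \
  mongo --tls \
    --tlsCertificateKeyFile=/certs/mongodb.pem \
    --tlsCAFile=/certs/mongodb-ca-cert \
    --eval "db.adminCommand('ping')"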

franklin432 commented 4 years ago

I was able to get @binboum's code to work. Of course, I had to make a few minor changes to his secret-ca.yaml file so that it referenced my deployment rather than "mystack". I also made some changes to his values.yaml file to fit my deployment: for instance, I had to comment out the existingSecret: credentials entry, the affinity: section, the podLabels: stack: test label, and the podAnnotations: sidecar.istio.io/inject: "false" annotation.

From what I understand of @binboum's secret-ca.yaml, he is using the Helm "pre-install" hook so that the certs are generated only on chart install. The hook generates the Certificate Authority (the CA cert and CA key) before any other resources are created and stores it as a Secret; the initContainer sets up the rest.
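
For reference, a minimal sketch of that idea (not @binboum's actual secret-ca.yaml; the Secret name and keys simply match the ca-clients secret mounted by the values above) is a Secret template carrying a pre-install hook and using Helm's genCA helper, so the CA is generated once at install time:

# Sketch only -- one possible shape for a pre-install-hook CA Secret.
{{- $ca := genCA "Certificate Authority" 3650 }}
apiVersion: v1
kind: Secret
metadata:
  name: ca-clients
  annotations:
    "helm.sh/hook": pre-install
type: Opaque
data:
  mongodb-ca-cert: {{ $ca.Cert | b64enc }}
  mongodb-ca-key: {{ $ca.Key | b64enc }}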

I was also able to test @pixie79's TLS setup option, which uses cert-manager, and that seems to work as well. I am not sure which method is best, but so far both work. I have not yet tested either option with metrics set to true, though; I will look into that.
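
For comparison, the cert-manager route looks roughly like the sketch below (illustrative only, not @pixie79's exact manifests; the release name "mongodb" and namespace "default" are assumptions). cert-manager writes tls.crt, tls.key and ca.crt into the named Secret, which can then be mounted instead of generating certificates in an init container:

# Illustrative sketch: self-signed Issuer plus a Certificate covering the
# replica set's headless-service hostnames.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: mongodb-selfsigned
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mongodb-tls
spec:
  secretName: mongodb-tls
  issuerRef:
    name: mongodb-selfsigned
    kind: Issuer
  commonName: mongodb
  dnsNames:
    - mongodb-0.mongodb-headless.default.svc.cluster.local
    - mongodb-1.mongodb-headless.default.svc.cluster.local
    - mongodb-2.mongodb-headless.default.svc.cluster.local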

dani8art commented 4 years ago

Hi @sanguis we will be happy to handle it, and also @binboum and @pixie79 we would like you to review it as well. Thank you all for the contributions and the effort!! Looking forward to the PR!

guillaumelachaud commented 4 years ago

How does the proposed solution work with external access?

It seems that enabling external access disables replicaSetHostnames, which in turn means the nodes can only talk to each other through their external IP addresses, addresses that are not part of the SANs of the node certificates generated by the init container.

Any idea how to bypass this limitation?

Thanks!

dani8art commented 4 years ago

Hi @GuillaumeLachaud, we would like to see the proposed PR first, and then we can discuss this there too. @sanguis, would you still like to contribute these changes? Did you have the chance to work on it?

franklin432 commented 4 years ago

Any update on this proposed PR? @sanguis @dani8art

dani8art commented 4 years ago

Hi @sanguis, would you still like to contribute these changes? Did you have the chance to work on them?

If working on this is not possible for you, we could handle it internally, or maybe another colleague here would like to work on it.

Unfortunately, if we end up handling it ourselves we cannot give you an ETA, given our priorities and internal milestones and the fact that there is already a way to add TLS to the chart.

mnmami commented 4 years ago

@dani8art, I'm working on it at the moment, but I shouldn't promise anything on a short timescale (this is my first experience with Helm chart development, and it is only a portion of my work). If I do get it done, though, I'll be very happy to contribute to this nice and unique chart.

Allow me to be honest: for a production enterprise deployment, the current version without TLS encryption is simply not usable (that is my case). So I'd strongly suggest reshuffling your priorities and moving TLS/SSL support towards the top.

gbarrett1988 commented 4 years ago

It may be beneficial to support custom CNAMEs with TLS enabled as well; this seems necessary when using a production CA. It seems the MongoDB team had to do this in their Kubernetes operator too, see here: https://docs.mongodb.com/kubernetes-operator/master/reference/k8s-operator-specification/#spec.connectivity.replicaSetHorizons
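
Per that reference, the operator exposes this through spec.connectivity.replicaSetHorizons, roughly as below (the horizon name and hostnames are placeholders). Something similar in this chart would mean letting users map each replica to an externally resolvable name that also ends up in that replica's certificate SANs:

# Shape based on the linked MongoDB operator docs; values are examples.
spec:
  connectivity:
    replicaSetHorizons:
      - "external": "mongodb-0.example.com:27017"
      - "external": "mongodb-1.example.com:27017"
      - "external": "mongodb-2.example.com:27017"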

franklin432 commented 4 years ago

@dani8art I can try to contribute these changes. I'm new to contributing, so I'll have to look into the format/process.

gbarrett1988 commented 4 years ago

I would also like to see a quick turnaround on this, but as a dev I understand what it means to have priorities, and pleasing everyone is never possible. Still, TLS is a pretty big deal for production applications. Perhaps there's another chart or an nginx/HAProxy config that can cover this in the interim? Any suggestions?

binboum commented 4 years ago

I can make time for a deliverable within 1 month.

mnmami commented 4 years ago

Thanks to the PR and the input from the contributors, I managed to enable SSL at server startup and to connect to the server internally using the mongo shell.

What needs to be adapted now is external access to the MongoDB server with SSL enabled. We can't use the LoadBalancer service names (ending with -external) as MongoDB hosts, since those names (and the respective IPs) differ from the hostnames in the certificates (CN/SANs) generated at startup.

Any thoughts on how external access can now be enabled?
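
One idea, sketched below purely as an assumption (the chart does not inject these variables itself), is to have the init container append each pod's externally reachable name and/or LoadBalancer IP to the alt_names section it already writes, so that the generated mongodb.pem covers both internal and external addresses. Appending works here only because [alt_names] is the last section of the generated openssl.cnf:

# Hypothetical addition to the existing init-container script.
# EXTERNAL_HOSTNAME / EXTERNAL_IP would have to be provided per pod, e.g. via
# extraEnvVars or the autoDiscovery init container; the chart does not set them.
cat >>/certs/openssl.cnf <<EOL
DNS.6 = ${EXTERNAL_HOSTNAME}
IP.1 = ${EXTERNAL_IP}
EOL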

dani8art commented 4 years ago

We must make sure we can pass extra CNs/SANs into the certificate generation at startup, appended to the ones generated automatically, since we will know in advance which CN will point to each external IP. Don't you think so?
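
A hypothetical values interface for that (the key names below are invented for illustration and are not parameters of the chart) could look like:

# Hypothetical, illustrative values only -- these keys do not exist in the chart.
tls:
  enabled: true
  mode: requireTLS
  # Extra SANs appended to every generated server certificate, covering the
  # externally reachable name/IP of each replica.
  extraDnsNames:
    - mongodb-0.example.com
    - mongodb-1.example.com
    - mongodb-2.example.com
  extraIPs:
    - 203.0.113.10
    - 203.0.113.11
    - 203.0.113.12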