percona / percona-helm-charts

Collection of Helm charts for Percona Kubernetes Operators.
https://www.percona.com/software/percona-kubernetes-operators

Setting up the psmdb-db Helm chart similarly to how I set up my MongoDB replica set #253

Open vinnytwice opened 10 months ago

vinnytwice commented 10 months ago

Hi, since I saw that it's a drop-in replacement, I'm about to swap MongoDB Community for the Percona psmdb-db chart in my Kubernetes cluster, and I need a few clarifications about the chart's values.

  1. I need to specify two PVCs (one for data and one for logs) per pod, but I only see parameters for a single replsets[0].volumeSpec.pvc. How do I add more than one, if that's possible?
  2. In the chart's values file I don't see any replsets[0].podSecurityContext, while I do see it for nonvoting and sharded, so I guess it's just not present in the chart's values file but still available as a parameter to set.
  3. How do I specify a db name? It will be the db name in the driver's connection string, so I'd need to know what db name to use.
  4. Is remote backup storage on GCP possible for Firebase storage buckets?
  5. Is the namespace definable only at chart install time, or is there a parameter to specify it?

Many thanks. At the moment the manifest for the MongoDBCommunity resource is:

apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: {{ .Values.replica_set.name}} #mongo-rs
  namespace: {{ .Values.namespace}} #default
spec:
  members: {{ .Values.replica_set.replicas}} #1
  type: ReplicaSet
  # mongo version
  version: {{ .Values.replica_set.mongo_version}}
  security:
    authentication:
      modes:
        - SCRAM
  users:
    - name: {{ .Values.replica_set.admin_user.name}} #admin-user
      db: {{ .Values.replica_set.db_name}} #fixit
      passwordSecretRef:
        name: {{ .Values.secret.name}} #mongo-secret
      roles:
        - name: {{ .Values.replica_set.admin_user.role_1}} #clusterAdmin
          db: {{ .Values.replica_set.db_name}} #fixit
        - name: {{ .Values.replica_set.admin_user.role_2}} #userAdminAnyDatabase
          db: {{ .Values.replica_set.db_name}}
        - name: {{ .Values.replica_set.admin_user.role_3}} #ReadWriteAnyDatabase
          db: {{ .Values.replica_set.db_name}}   #fixit
      scramCredentialsSecretName: {{ .Values.replica_set.admin_user.scramCredentialsSecretName}} #my-scram-mg-fixit
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
  statefulSet:
    spec:
      # You can specify a name for the service object created by the operator, by default it generates one if not specified.
      # serviceName: mongo-rs-svc
      # get them with kubectl -n default get secret mongo-rs-fixit-admin-user -o json  and then decode them with echo "value" | base64 -d
      # standard connection string-> mongodb://admin-user:password@mongo-rs-0.mongo-rs-svc.default.svc.cluster.local:27017/fixit?replicaSet=mongo-rs&ssl=false
      # standard srv connection string -> mongodb+srv://admin-user:password@mongo-rs-svc.default.svc.cluster.local/fixit?replicaSet=mongo-rs&ssl=false

      template:
        spec:
          automountServiceAccountToken: false
          securityContext:
            privileged: false
            allowPrivilegeEscalation: false
            runAsNonRoot: true
            runAsUser: 1000
            readOnlyRootFilesystem: true
          containers:
            - name: mongod
              resources:
                limits:
                  cpu: {{ .Values.resources.mongod.limits.cpu}}
                  memory: {{ .Values.resources.mongod.limits.memory}}
                requests:
                  cpu: {{ .Values.resources.mongod.requests.cpu}}
                  memory: {{ .Values.resources.mongod.requests.memory}}
            - name: mongodb-agent
              resources:
                limits:
                  cpu: {{ .Values.resources.mongodb_agent.limits.cpu}}
                  memory: {{ .Values.resources.mongodb_agent.limits.memory}}
                requests:
                  cpu: {{ .Values.resources.mongodb_agent.requests.cpu}}
                  memory: {{ .Values.resources.mongodb_agent.requests.memory}}
          # nodeSelector:
          #   server-type: mongodb
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                      - key: app
                        operator: In
                        values:
                          - mongo-replicaset
                  topologyKey: 'kubernetes.io/hostname'

      volumeClaimTemplates:
        - metadata:
            name: {{ .Values.volume_claim_templates.data.name}} #data-volume
          spec:
            accessModes:
              - {{ .Values.volume_claim_templates.data.access_mode}} #ReadWriteOnce
              # - ReadWriteMany
            storageClassName: {{ .Values.storage_class.data.name}} #mongo-sc-data
            resources:
              requests:
                storage: {{ .Values.volume_claim_templates.data.storage}} #16Gi
        - metadata:
            name: {{ .Values.volume_claim_templates.logs.name}} #logs-volume
          spec:
            accessModes:
              - {{ .Values.volume_claim_templates.logs.access_mode}} #ReadWriteOnce
              # - ReadWriteMany
            storageClassName: {{ .Values.storage_class.logs.name}} #mongo-sc-logs
            resources:
              requests:
                storage: {{ .Values.volume_claim_templates.logs.storage}} #4Gi
spron-in commented 9 months ago

Hello @vinnytwice ,

sorry it sat here so long. I would think that forums.percona.com would be a much better place to discuss this. Anyway, answering your questions/concerns one by one:

  1. It is not possible yet, but we have it on our roadmap.
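(For reference, the single supported PVC is declared under replsets[0].volumeSpec.pvc in the chart's values. A minimal values.yaml sketch — the storage class and size here are illustrative, carried over from the manifest above, not chart defaults:)

```yaml
# psmdb-db values.yaml sketch: one PVC per replica set pod
replsets:
- name: rs0
  size: 3
  volumeSpec:
    pvc:
      storageClassName: mongo-sc-data   # assumed StorageClass name
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 16Gi                 # illustrative size
```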

  2. We do support it, but indeed it is not shown in the default cr.yaml and not properly documented. You can use it as follows:

    spec:
      replsets:
      - name: rs0
        podSecurityContext:
          fsGroup: 1001
          supplementalGroups: [1001, 1002, 1003]
  3. That is another gap we have on the roadmap. Right now you provision the cluster, then create the database right away after it is provisioned.
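(In MongoDB a database springs into existence on first write, so once the cluster is up you can connect and write to whatever name your connection string uses. An illustrative mongosh session — the host, user, and password below are placeholders, not values the chart generates:)

```shell
mongosh "mongodb://admin-user:password@my-db-psmdb-db-rs0.default.svc.cluster.local/admin?replicaSet=rs0"
> use fixit                                  // switches to (and lazily creates) the db
> db.bootstrap.insertOne({ created: true })  // first write materializes it
```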

  4. You can use GCP through the S3 protocol. It works perfectly. Then you can consume it from Firebase as a bucket.
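(A hedged sketch of what that could look like in the chart's backup section, pointing the S3 client at the GCS interoperability endpoint; the storage name, bucket, and Secret name are placeholders, and the Secret is assumed to hold GCS HMAC keys:)

```yaml
backup:
  enabled: true
  storages:
    gcs-s3:                                # arbitrary storage name
      type: s3
      s3:
        bucket: my-backup-bucket           # placeholder bucket
        credentialsSecret: my-gcs-hmac-keys # Secret with HMAC access/secret keys
        endpointUrl: https://storage.googleapis.com  # GCS S3-compatible endpoint
```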

  5. I'm not sure I understand the question.
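(If the question is whether the chart can choose its own namespace: with Helm the release namespace is normally picked at install time rather than in values. A generic Helm sketch, not specific to this chart — release and namespace names are placeholders:)

```shell
# Install into a namespace chosen on the command line,
# creating it first if it does not exist yet.
helm install my-db percona/psmdb-db --namespace mongodb --create-namespace
```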