[bitnami/postgresql-ha] password authentication failed for user "repmgr" #1738

Closed bsakweson closed 4 years ago

bsakweson commented 4 years ago

Which chart: bitnami/postgresql-ha 1.1.0 (PostgreSQL version 11.6.0). Chart for PostgreSQL with HA architecture (using Replicat...

Description

I did not do much here; I just set up my parameters in my customized values.yaml, shown below.

Steps to reproduce the issue:

  1. Prepared my values.yaml file (shown below) and ran the installation (see the example command after this step).
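
For reference, a minimal install along these lines might look like the following sketch (Helm 2 syntax, matching the versions reported at the end of this report; the release name my-pg is a placeholder):

helm repo add bitnami https://charts.bitnami.com/bitnami
# Install the chart with the customized values file; "my-pg" is a hypothetical release name
helm install --name my-pg -f values.yaml bitnami/postgresql-ha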

Describe the results you received:

postgresql-repmgr 17:44:47.85
postgresql-repmgr 17:44:47.86 Welcome to the Bitnami postgresql-repmgr container
postgresql-repmgr 17:44:47.86 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql-repmgr
postgresql-repmgr 17:44:47.86 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql-repmgr/issues
postgresql-repmgr 17:44:47.86 Send us your feedback at containers@bitnami.com
postgresql-repmgr 17:44:47.87
postgresql-repmgr 17:44:47.89 INFO  ==> ** Starting PostgreSQL with Replication Manager setup **
repmgr 17:44:47.99 INFO  ==> Validating settings in REPMGR_* env vars...
postgresql 17:44:48.00 INFO  ==> Validating settings in POSTGRESQL_* env vars..
repmgr 17:44:48.01 INFO  ==> Querying all partner nodes for common upstream node...
repmgr 17:44:48.06 INFO  ==> There are no nodes with primary role. Assuming the primary role...
repmgr 17:44:48.07 INFO  ==> Preparing PostgreSQL configuration...
postgresql 17:44:48.07 INFO  ==> postgresql.conf file not detected. Generating it...
repmgr 17:44:48.16 INFO  ==> Preparing repmgr configuration...
repmgr 17:44:48.17 INFO  ==> Initializing Repmgr...
postgresql 17:44:48.18 INFO  ==> Initializing PostgreSQL database...
postgresql 17:44:48.18 INFO  ==> Cleaning stale /bitnami/postgresql/data/postmaster.pid file
postgresql 17:44:48.19 INFO  ==> Custom configuration /opt/bitnami/postgresql/conf/postgresql.conf detected
postgresql 17:44:48.19 INFO  ==> Custom configuration /opt/bitnami/postgresql/conf/pg_hba.conf detected
postgresql 17:44:48.21 INFO  ==> Deploying PostgreSQL with persisted data...
postgresql 17:44:48.24 INFO  ==> Stopping PostgreSQL...
postgresql-repmgr 17:44:48.25 INFO  ==> ** PostgreSQL with Replication Manager setup finished! **

postgresql 17:44:48.39 INFO  ==> Starting PostgreSQL in background...
postgresql-repmgr 17:44:49.37 INFO  ==> ** Starting repmgrd **
[2019-12-13 17:44:49] [NOTICE] repmgrd (repmgrd 5.0.0) starting up
[2019-12-13 17:44:49] [ERROR] connection to database failed
[2019-12-13 17:44:49] [DETAIL]
FATAL:  password authentication failed for user "repmgr"

Describe the results you expected:

The installation should have succeeded.

values.yaml

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
#   storageClass: myStorageClass
#   postgresql:
#     username: customuser
#     password: custompassword
#     database: customdatabase
#     repmgrUsername: repmgruser
#     repmgrPassword: repmgrpassword
#     repmgrDatabase: repmgrdatabase
#     existingSecret: myExistingSecret
#   ldap:
#     bindpw: bindpassword
#     existingSecret: myExistingSecret
#   pgpool:
#     adminUsername: adminuser
#     adminPassword: adminpassword
#     existingSecret: myExistingSecret

## Bitnami PostgreSQL image
## ref: https://hub.docker.com/r/bitnami/postgresql/tags/
##
postgresqlImage:
  registry: docker.io
  repository: bitnami/postgresql-repmgr
  tag: 11.6.0-debian-9-r7
  ## Specify an imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

  ## Set to true if you would like to see extra information on logs
  ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
  ##
  debug: false

## Bitnami Pgpool image
## ref: https://hub.docker.com/r/bitnami/pgpool/tags/
##
pgpoolImage:
  registry: docker.io
  repository: bitnami/pgpool
  tag: 4.1.0-debian-9-r20
  ## Specify an imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

  ## Set to true if you would like to see extra information on logs
  ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
  ##
  debug: false

## Bitnami Minideb image
## ref: https://hub.docker.com/r/bitnami/minideb/tags/
##
volumePermissionsImage:
  registry: docker.io
  repository: bitnami/minideb
  tag: latest
  ## Specify an imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: Always
  ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

## Bitnami PostgreSQL Prometheus exporter image
## ref: https://hub.docker.com/r/bitnami/postgres-exporter/tags/
##
metricsImage:
  registry: docker.io
  repository: bitnami/postgres-exporter
  tag: 0.8.0-debian-9-r0
  ## Specify an imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

  ## Set to true if you would like to see extra information on logs
  ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
  ##
  debug: false

## String to partially override postgresql-ha.fullname template (will maintain the release name)
##
# nameOverride:

## String to fully override postgresql-ha.fullname template
##
# fullnameOverride:

## Kubernetes Cluster Domain
##
clusterDomain: cluster.local

## PostgreSQL parameters
##
postgresql:
  ## Number of replicas to deploy
  ##
  replicaCount: 2

  ## Update strategy for PostgreSQL statefulset
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
  ##
  updateStrategyType: RollingUpdate

  ## Additional pod annotations
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations: {}

  ## Affinity for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}

  ## Node labels for pod assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}

  ## Tolerations for pod assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: {}

  ## K8s Security Context
  ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  securityContext:
    enabled: true
    fsGroup: 1001
    runAsUser: 1001

  ## PostgreSQL containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 250m
    #   memory: 256Mi
    requests: {}
    #   cpu: 250m
    #   memory: 256Mi

  ## PostgreSQL container's liveness and readiness probes
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 6
  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 6

  ## Pod disruption budget configuration
  ##
  pdb:
    ## Specifies whether a Pod disruption budget should be created
    ##
    create: false
    minAvailable: 1
    # maxUnavailable: 1

  ## PostgreSQL configuration parameters
  ##
  username: ${postgres_username}
  password: ${postgres_password}
  # database:

  ## Upgrade repmgr extension in the database
  ##
  upgradeRepmgrExtension: false

  ## Configures pg_hba.conf to trust every user
  ##
  pgHbaTrustAll: false

  ## Repmgr configuration parameters
  ##
  repmgrUsername: repmgr
  repmgrPassword: ${repmgr_password}
  repmgrDatabase: repmgr
  repmgrLogLevel: NOTICE
  repmgrConnectTimeout: 5
  repmgrReconnectAttempts: 3
  repmgrReconnectInterval: 5

  ## Repmgr configuration
  ## Specify content for repmgr.conf
  ## Default: do not create repmgr.conf
  ## Alternatively, you can put your repmgr.conf under the files/ directory
  ## ref: https://github.com/bitnami/bitnami-docker-postgresql-repmgr#configuration-file
  ##
  # repmgrConfiguration: |-

  ## PostgreSQL configuration
  ## Specify runtime configuration parameters as a dict, using camelCase, e.g.
  ## {"sharedBuffers": "500MB"}
  ## Alternatively, you can put your postgresql.conf under the files/ directory
  ## ref: https://github.com/bitnami/bitnami-docker-postgresql-repmgr#configuration-file
  ##
  # configuration:

  ## PostgreSQL client authentication configuration
  ## Specify content for pg_hba.conf
  ## Default: do not create pg_hba.conf
  ## Alternatively, you can put your pg_hba.conf under the files/ directory
  ## ref: https://github.com/bitnami/bitnami-docker-postgresql-repmgr#configuration-file
  ##
  # pgHbaConfiguration: |-
  #   local all all trust
  #   host all all localhost trust
  #   host mydatabase mysuser 192.168.0.0/24 md5

  ## ConfigMap with PostgreSQL configuration
  ## NOTE: This will override repmgrConfiguration, configuration and pgHbaConfiguration
  ##
  # configurationCM:

  ## PostgreSQL extended configuration
  ## As above, but _appended_ to the main configuration
  ## Alternatively, you can put your *.conf under the files/conf.d/ directory
  ## ref: https://github.com/bitnami/bitnami-docker-postgresql-repmgr#allow-settings-to-be-loaded-from-files-other-than-the-default-postgresqlconf
  ##
  # extendedConf:

  ## ConfigMap with PostgreSQL extended configuration
  ## NOTE: This will override extendedConf
  ##
  # extendedConfCM:

  ## initdb scripts
  ## Specify dictionary of scripts to be run at first boot
  ## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
  ##
  # initdbScripts:
  #   my_init_script.sh: |
  #      #!/bin/sh
  #      echo "Do something."

  ## ConfigMap with scripts to be run at first boot
  ## NOTE: This will override initdbScripts
  ##
  # initdbScriptsCM:

## Pgpool parameters
##
pgpool:
  ## Number of replicas to deploy
  ##
  replicaCount: 1

  ## Additional pod annotations
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations: {}

  ## Affinity for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}

  ## Node labels for pod assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}

  ## Tolerations for pod assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: {}

  ## K8s Security Context
  ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  securityContext:
    enabled: true
    fsGroup: 0
    runAsUser: 0

  ## Pgpool containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 250m
    #   memory: 256Mi
    requests: {}
    #   cpu: 250m
    #   memory: 256Mi

  ## Pgpool container's liveness and readiness probes
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 5
  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 5

  ## Pod disruption budget configuration
  ##
  pdb:
    ## Specifies whether a Pod disruption budget should be created
    ##
    create: false
    minAvailable: 1
    # maxUnavailable: 1

  ## Pgpool configuration parameters
  ##
  adminUsername: admin
  # adminPassword:

## LDAP parameters
##
ldap:
  enabled: false
  ## Retrieve LDAP bindpw from existing secret
  ##
  # existingSecret: myExistingSecret
  uri:
  base:
  binddn:
  bindpw:
  bslookup:
  scope:
  tlsReqcert:
  nssInitgroupsIgnoreusers: root,nslcd

## Init Container parameters
##
volumePermissions:
  enabled: true
  ## K8s Security Context
  ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  securityContext:
    runAsUser: 0
  ## Init container's resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 100m
    #   memory: 128Mi
    requests: {}
    #   cpu: 100m
    #   memory: 128Mi

## PostgreSQL Prometheus exporter parameters
##
metrics:
  enabled: true
  ## K8s Security Context
  ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  securityContext:
    enabled: true
    runAsUser: 1001

  ## Prometheus exporter containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 250m
    #   memory: 256Mi
    requests: {}
    #   cpu: 250m
    #   memory: 256Mi

  ## Prometheus exporter container's liveness and readiness probes
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 6
  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 6

  ## Annotations for Prometheus exporter
  ##
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9187"

  ## Enable this if you're using Prometheus Operator
  ##
  serviceMonitor:
    enabled: false
    ## Specify a namespace if needed
    # namespace: monitoring
    # fallback to the prometheus default unless specified
    # interval: 10s
    # scrapeTimeout: 10s
    ## Defaults to what's used if you follow CoreOS [Prometheus Install Instructions](https://github.com/helm/charts/tree/master/stable/prometheus-operator#tldr)
    ## [Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#prometheus-operator-1)
    ## [Kube Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#exporters)
    selector:
      prometheus: kube-prometheus

## Persistence parameters
##
persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## If defined, PVC must be created manually before volume will be bound
  ## The value is evaluated as a template
  ##
  # existingClaim:
  ## Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner.
  ##
  # storageClass: "-"
  ## The path the volume will be mounted at, useful when using different
  ## PostgreSQL images.
  ##
  mountPath: /bitnami/postgresql
  ## Persistent Volume Access Mode
  ##
  accessModes:
    - ReadWriteOnce
  ## Persistent Volume Claim size
  ##
  size: ${postgres_volume_size}
  ## Persistent Volume Claim annotations
  ##
  annotations: {}

## Pgpool service parameters
##
service:
  ## Service type
  ##
  type: ClusterIP
  ## Service Port
  ##
  port: 5432
  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  # nodePort:
  ## Set the LoadBalancer service type to internal only.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  # loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24
  ## Set the Cluster IP to use
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address
  ##
  # clusterIP: None
  ## Provide any additional annotations which may be required
  ##
  annotations: {}

## Ingress parameters
##
ingress:
  ## Set to true to enable ingress record generation
  enabled: false

  ## Set this to true in order to add the corresponding annotations for cert-manager
  certManager: false

  ## Ingress annotations done as key:value pairs
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
  ##
  ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
  ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
  annotations: {}
  #  kubernetes.io/ingress.class: nginx

  ## The list of hostnames to be covered with this ingress record.
  ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
  hosts:
    - name: postgresql.local
      path: /

  ## The tls configuration for the ingress
  ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
  tls:
    - hosts:
        - postgresql.local
      secretName: postgresql.local-tls

  secrets:
  ## If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
  ## -----BEGIN RSA PRIVATE KEY-----
  ##
  ## name should line up with a tlsSecret set further up
  ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
  ##
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  # - name: airflow.local-tls
  #   key:
  #   certificate:

## NetworkPolicy parameters
##
networkPolicy:
  enabled: true

  ## The Policy model to apply. When set to false, only pods with the correct
  ## client labels will have network access to the port PostgreSQL is listening
  ## on. When true, PostgreSQL will accept connections from any source
  ## (with the correct destination port).
  ##
  allowExternal: false

Version of Helm and Kubernetes (note: I am running serverless Helm):

Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-14T04:24:34Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
bsakweson commented 4 years ago

Here is what I think caused that error. I ran it the first time and, for some reason, it failed. I deleted the deployment without deleting the PV and then tried to deploy it again using the same PV, but this time (and several times afterwards) it failed. After I deleted the PV and redeployed a fresh deployment, it worked. I'd say: remember to delete the PVC before redeploying if a deployment fails.
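
A sketch of that cleanup (Helm 2 syntax; the release name my-pg and the label selector are assumptions, so check what kubectl get pvc --show-labels reports for your deployment):

helm delete --purge my-pg
# Remove the PVCs left behind by the StatefulSet so the next install starts from a clean volume
kubectl delete pvc -l app.kubernetes.io/instance=my-pg
helm install --name my-pg -f values.yaml bitnami/postgresql-ha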

carrodher commented 4 years ago

Thanks for letting us know. Passwords are stored in the PV (in a secure way). Since the password is randomly generated (unless you manually specify one), you need to delete the PVC so that the stored password matches the new one.
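
If you need to keep the PV instead, an alternative is to read the previously generated passwords back out of the release secret before reinstalling, and pass them in explicitly. A sketch, assuming the default secret name my-pg-postgresql-ha-postgresql and the key names mentioned later in this thread (list your actual secrets with kubectl get secret):

# Recover the generated passwords from the existing release secret
kubectl get secret my-pg-postgresql-ha-postgresql -o jsonpath='{.data.postgresql-password}' | base64 -d
kubectl get secret my-pg-postgresql-ha-postgresql -o jsonpath='{.data.repmgr-password}' | base64 -d
# Reuse these values via --set postgresql.password=... and --set postgresql.repmgrPassword=... on the next install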

bigbitbus commented 4 years ago

Hi

If I delete the PV (to deal with the repmgr password being randomly reset), would it also delete all the data in the database?

The pods in my AWS EKS cluster got recreated (when we changed the instance types), and now I am getting this log line:

[2020-04-22 13:25:52] [DETAIL] attempted to connect using:
  user=repmgr password=*******redacted****** connect_timeout=5 dbname=repmgr host=hapgdb-postgresql-ha-postgresql-0.hapgdb-postgresql-ha-postgresql-headless.hapgdb.svc.cluster.local port=5432 fallback_application_name=repmgr

Thanks

marcosbc commented 4 years ago

It should be fine for pods to be recreated. However, if you delete the data in the PV you would lose the data.

Note that the passwords/secrets are set initially, when the deployments/statefulsets are created. If you launched your chart against existing PVs while using random passwords, it is very likely the two sets of credentials differ.
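
One way to check for such a mismatch is to read the password currently stored in the release secret and try to authenticate with it directly. A sketch under assumed resource names (the secret, host, and namespace here mirror the hapgdb release from the logs above, but may differ in your cluster):

REPMGR_PASSWORD=$(kubectl -n hapgdb get secret hapgdb-postgresql-ha-postgresql \
  -o jsonpath='{.data.repmgr-password}' | base64 -d)
# Try to log in as repmgr against node 0 using the password from the current secret
kubectl -n hapgdb run pg-client --rm -it --restart=Never --image=bitnami/postgresql:11 \
  --env="PGPASSWORD=$REPMGR_PASSWORD" -- \
  psql -h hapgdb-postgresql-ha-postgresql-0.hapgdb-postgresql-ha-postgresql-headless \
  -U repmgr -d repmgr -c 'SELECT 1'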

bigbitbus commented 4 years ago

I have a similar issue popping up. In my case I didn't touch the PVs or PVCs; I just resized the nodes (different T-shirt sizes) in my AWS EKS Kubernetes cluster. So yes, all pods got re-created.

postgresql-repmgr 12:46:46.17 
postgresql-repmgr 12:46:46.17 Welcome to the Bitnami postgresql-repmgr container
postgresql-repmgr 12:46:46.17 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql-repmgr
postgresql-repmgr 12:46:46.18 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql-repmgr/issues
postgresql-repmgr 12:46:46.18 Send us your feedback at containers@bitnami.com
postgresql-repmgr 12:46:46.18 
postgresql-repmgr 12:46:46.19 INFO  ==> ** Starting PostgreSQL with Replication Manager setup **
repmgr 12:46:46.24 INFO  ==> Validating settings in REPMGR_* env vars...
postgresql 12:46:46.25 INFO  ==> Validating settings in POSTGRESQL_* env vars..
repmgr 12:46:46.25 INFO  ==> Querying all partner nodes for common upstream node...
repmgr 12:46:46.29 INFO  ==> There are no nodes with primary role. Assuming the primary role...
repmgr 12:46:46.30 INFO  ==> Preparing PostgreSQL configuration...
postgresql 12:46:46.30 INFO  ==> postgresql.conf file not detected. Generating it...
repmgr 12:46:46.35 INFO  ==> Preparing repmgr configuration...
repmgr 12:46:46.36 INFO  ==> Initializing Repmgr...
postgresql 12:46:46.36 INFO  ==> Initializing PostgreSQL database...
postgresql 12:46:46.36 INFO  ==> Cleaning stale /bitnami/postgresql/data/postmaster.pid file
postgresql 12:46:46.37 INFO  ==> Custom configuration /opt/bitnami/postgresql/conf/postgresql.conf detected
postgresql 12:46:46.37 INFO  ==> Custom configuration /opt/bitnami/postgresql/conf/pg_hba.conf detected
postgresql 12:46:46.38 INFO  ==> Deploying PostgreSQL with persisted data...
postgresql 12:46:46.39 INFO  ==> Stopping PostgreSQL...
postgresql-repmgr 12:46:46.40 INFO  ==> ** PostgreSQL with Replication Manager setup finished! **

postgresql 12:46:46.45 INFO  ==> Starting PostgreSQL in background...
postgresql-repmgr 12:46:46.57 INFO  ==> ** Starting repmgrd **
[2020-05-09 12:46:46] [NOTICE] repmgrd (repmgrd 5.0.0) starting up
[2020-05-09 12:46:46] [ERROR] connection to database failed
[2020-05-09 12:46:46] [DETAIL] 
FATAL:  password authentication failed for user "repmgr"

[2020-05-09 12:46:46] [DETAIL] attempted to connect using:
  user=repmgr password=******** connect_timeout=5 dbname=repmgr host=hapgdb-postgresql-ha-postgresql-0.hapgdb-postgresql-ha-postgresql-headless.hapgdb.svc.cluster.local port=5432 fallback_application_name=repmgr

The repmgr_password was randomly generated in the installation.

marcosbc commented 4 years ago

Could you share some steps so we can reproduce this issue from a clean Helm chart deployment?

I tried to reproduce it by deploying and then suddenly removing all nodes at once a few times, but the chart kept working properly.

bigbitbus commented 4 years ago

Thanks for your message.

In order to reproduce the error (in Amazon EKS at least),

  1. Create an HA cluster using the default values.yaml file.

  2. Delete all the nodes in the K8s cluster (note that all block storage comes from EBS volumes, so those are not deleted). Deleting all nodes simply takes out all pods.

  3. Recreate the nodes so Kubernetes can start re-creating the pods.

The database will not come up; it will complain about the repmgr password, as shown in the logs I posted earlier.
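
Steps 2 and 3 can be approximated on EKS by scaling the node group to zero and back up, which recreates every pod while the EBS-backed PVs survive. A sketch (eksctl syntax; the cluster and node-group names are placeholders):

# Take out all nodes: every pod is evicted, but EBS volumes and PVCs remain
eksctl scale nodegroup --cluster=my-cluster --name=my-nodegroup --nodes=0
# Bring the nodes back so Kubernetes reschedules the pods onto fresh instances
eksctl scale nodegroup --cluster=my-cluster --name=my-nodegroup --nodes=3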

I will try explicitly setting the repmgr password in the values.yaml file later today, and then report back whether that solves the problem. Something along the lines of the sketch below.
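
A sketch of that change (Helm 2 syntax; the release name hapgdb is inferred from the log hostnames above, and the password value is a placeholder):

# Overlay file pinning the repmgr password so it survives pod and node recreation
cat > repmgr-values.yaml <<'EOF'
postgresql:
  repmgrPassword: my-fixed-repmgr-pass
EOF
helm upgrade hapgdb bitnami/postgresql-ha -f values.yaml -f repmgr-values.yaml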

Thanks

MrAmbiG commented 4 years ago

Same issue here. I uninstalled the Helm chart, deleted the PVC, deleted the namespace, recreated the namespace, and reinstalled the Helm chart:

user=postrepmgr password=mycustompass connect_timeout=5 dbname=postrepdb host=mycompany-postgresql-ha-postgresql-0.mycompany-postgresql-ha-postgresql-headless.mycompany.svc.cluster.local port=5432 fallback_application_name=repmgr

marcosbc commented 4 years ago

Hi @MrAmbiG, could you share how you deployed the Helm chart? (including any custom values in values.yaml or --set options).

Note that the Bitnami PostgreSQL HA chart creates two secrets by default, postgresql-password and repmgr-password.

If you don't set values for those passwords in values.yaml or via --set, they will be regenerated on each deployment. That means if you deploy a new PostgreSQL HA chart on top of the previous volume (which holds the old, correct credentials), it will try to connect with the new (wrong) credentials, and authentication therefore fails.
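
To avoid that, pin both passwords explicitly at install time so they stay stable across reinstalls. A sketch (Helm 2 syntax; the release name and password values are placeholders, and the value keys match the values.yaml shown earlier in this issue):

helm install --name my-pg bitnami/postgresql-ha \
  --set postgresql.password=my-fixed-pg-pass \
  --set postgresql.repmgrPassword=my-fixed-repmgr-pass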

MrAmbiG commented 4 years ago

Every time I deployed/redeployed, I made sure to delete the PVC and the namespace it was deployed to, after deleting the Helm chart itself. So there is no way it could have tried to use an old volume where old credentials were stored. I tried:

  1. setting repmgr credentials (global)
  2. not setting repmgr credentials

Due to a business deadline we went with the non-HA edition of the Helm chart, and I have thus not saved the old values.yaml file to share.

marcosbc commented 4 years ago

In that case, it seems your issue is not with recreating the Helm chart; instead, even the first chart installation fails for you.

We're sorry to hear it did not work for you. I tried again with the following changed values and the deployment went fine without any issues:

diff --git a/bitnami/postgresql-ha/values.yaml b/bitnami/postgresql-ha/values.yaml
index 565b6e25f..54b5e37af 100644
--- a/bitnami/postgresql-ha/values.yaml
+++ b/bitnami/postgresql-ha/values.yaml
@@ -236,9 +236,9 @@ postgresql:

   ## Repmgr configuration parameters
   ##
-  repmgrUsername: repmgr
-  # repmgrPassword:
-  repmgrDatabase: repmgr
+  repmgrUsername: postrepmgr
+  repmgrPassword: mycustompass
+  repmgrDatabase: postrepdb
   repmgrLogLevel: NOTICE
   repmgrConnectTimeout: 5
   repmgrReconnectAttempts: 3

Even deleting the deployment (including volumes) and re-creating it worked. So it is most likely something related to your Kubernetes environment that is causing these issues.