helm / charts

⚠️ (OBSOLETE) Curated applications for Kubernetes
Apache License 2.0

[stable/postgresql] Can't log into user account #16251

Closed ad-si closed 4 years ago

ad-si commented 5 years ago

Describe the bug

I have stable/postgresql as a requirement for my chart. I then install it with:

helm install -f values.yaml .

values.yaml contains:

postgresqlDatabase: my-database
postgresqlUsername: postgres
postgresqlPassword: secret

global:
  postgresql:
    postgresqlDatabase: my-database
    postgresqlUsername: postgres
    postgresqlPassword: secret
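
For context, a subchart requirement like this is normally declared in the parent chart's requirements.yaml (Helm 2) and pulled in before installing. A minimal sketch, assuming an illustrative version range and the default stable repository (neither is taken from the original report):

# Hypothetical requirements.yaml for the parent chart; name, version range, and repository URL are assumptions
cat > requirements.yaml <<'EOF'
dependencies:
  - name: postgresql
    version: "6.x.x"
    repository: "https://kubernetes-charts.storage.googleapis.com"
EOF

# Fetch the postgresql subchart into charts/ so that "helm install -f values.yaml ." can resolve it
helm dependency update .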

When I now attach to the postgres pod with kubectl exec -it my-release-postgresql-0 bash and try to log in with the password "secret", it doesn't work:

$ psql -U postgres
Password for user postgres:
psql: FATAL:  password authentication failed for user "postgres"

Version of Helm and Kubernetes:

$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.2", GitCommit:"a8b13cc5ab6a7dbef0a58f5061bcc7c0c61598e7", GitTreeState:"clean"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
$ docker --version
Docker version 19.03.1, build 74b1e89

Any ideas?

ad-si commented 5 years ago

Ah, I can get the password with

kubectl get secret my-release-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode

But I can't log in with this password either 🤔

alemorcuq commented 5 years ago

Hi, @ad-si.

I've just tried deploying the PostgreSQL chart and I can log in using the password provided by the secret:

$ helm install .
NAME:   interested-chinchilla
LAST DEPLOYED: Tue Aug 13 07:15:25 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                              TYPE    DATA  AGE
interested-chinchilla-postgresql  Opaque  1     0s

==> v1/Service
NAME                                       TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
interested-chinchilla-postgresql           ClusterIP  10.30.249.103  <none>       5432/TCP  0s
interested-chinchilla-postgresql-headless  ClusterIP  None           <none>       5432/TCP  0s

==> v1beta2/StatefulSet
NAME                              READY  AGE
interested-chinchilla-postgresql  0/1    0s
$ kubectl get secret --namespace default interested-chinchilla-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode
lWIq3w9tpl
$ kubectl exec -ti interested-chinchilla-postgresql-0 bash
I have no name!@interested-chinchilla-postgresql-0:/$ psql -U postgres
Password for user postgres:
psql (11.5)
Type "help" for help.

postgres=#

Can you provide more context on this? Maybe you are trying to connect from outside the cluster and need to specify the host?

Regards, Alejandro
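
If the connection really is coming from outside the cluster, one way to test it is to port-forward the service and pass the host explicitly. A minimal sketch, reusing the release name from the example above (all names are illustrative):

# Forward the chart's ClusterIP service to localhost
kubectl port-forward svc/interested-chinchilla-postgresql 5432:5432 &

# Connect with an explicit host instead of relying on the local Unix socket
PGPASSWORD="$(kubectl get secret interested-chinchilla-postgresql -o jsonpath='{.data.postgresql-password}' | base64 --decode)" \
  psql -h 127.0.0.1 -p 5432 -U postgres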

ad-si commented 5 years ago

Thanks for the confirmation @alemorcuq. I was digging a little more and something seems really off. When I run:

$ helm install .

I can log into Postgres, but when I run

$ helm install --name my-custom-name .

I can't. What is going on here? 😳

For clarification: I'm talking about a custom chart, where Postgres is a requirement.

alemorcuq commented 5 years ago

Hi, @ad-si,

Let's try to debug this a bit. Can you check the variables that are being used to deploy postgres? For example:

$ kubectl get po interested-chinchilla-postgresql-0 -o yaml
[...]
  containers:
  - env:
    - name: BITNAMI_DEBUG
      value: "false"
    - name: POSTGRESQL_PORT_NUMBER
      value: "5432"
    - name: POSTGRESQL_VOLUME_DIR
      value: /bitnami/postgresql
    - name: PGDATA
      value: /bitnami/postgresql/data
    - name: POSTGRES_USER
      value: postgres
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          key: postgresql-password
          name: interested-chinchilla-postgresql
[...]

And then check the secret:

$ kubectl get secret interested-chinchilla-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode
lWIq3w9tpl

Maybe there's something odd there. Let me know if you see anything wrong.

Regards, Alejandro

deepanshululla commented 5 years ago

Did you create a user, as part of the Docker image, with the credentials that you are passing to the chart?

happysalada commented 5 years ago

I get the same problem. Looking at the logs:

Sep 8 16:13:58 pg-postgresql-0 pg-postgresql FATAL password authentication failed for user "jin"
Sep 8 16:13:58 pg-postgresql-0 pg-postgresql GMT [562] DETAIL:  Role "jin" does not exist.
    Connection matched pg_hba.conf line 1: "host     all             all             0.0.0.0/0               md5" 

That is when I try to create a special user. If I just go with the default user postgres, here is what I get:

Sep 8 16:06:43 pg-postgresql-0 pg-postgresql FATAL password authentication failed for user "postgres"
Sep 8 16:06:43 pg-postgresql-0 pg-postgresql GMT [556] DETAIL:  User "postgres" has no password assigned.
    Connection matched pg_hba.conf line 1: "host     all             all             0.0.0.0/0               md5" 

All the right environment variables are set:

  containers:
  - env:
    - name: BITNAMI_DEBUG
      value: "false"
    - name: POSTGRESQL_PORT_NUMBER
      value: "5432"
    - name: POSTGRESQL_VOLUME_DIR
      value: /bitnami/postgresql
    - name: PGDATA
      value: /bitnami/postgresql/data
    - name: POSTGRES_REPLICATION_MODE
      value: master
    - name: POSTGRES_REPLICATION_USER
      value: repl_user
    - name: POSTGRES_REPLICATION_PASSWORD
      valueFrom:
        secretKeyRef:
          key: postgresql-replication-password
          name: postgres-postgresql
    - name: POSTGRES_CLUSTER_APP_NAME
      value: my_application
    - name: POSTGRES_USER
      value: postgres
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          key: postgresql-password
          name: postgres-postgresql
    - name: POSTGRES_DB
      value: jin_2

And I see the right secret using kubectl get secret --namespace default postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode

happysalada commented 5 years ago

Here is my values.yaml for reference. It's the default one with replication, a username, a password, and my storage class. (I verified that the PVC bound properly.)

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
global:
  postgresql: {}
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
#   storageClass: myStorageClass

## Bitnami PostgreSQL image version
## ref: https://hub.docker.com/r/bitnami/postgresql/tags/
##
image:
  registry: docker.io
  repository: bitnami/postgresql
  tag: 11.5.0-debian-9-r26
  ## Specify a imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

  ## Set to true if you would like to see extra information on logs
  ## It turns BASH and NAMI debugging in minideb
  ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
  debug: false

## String to partially override postgresql.fullname template (will maintain the release name)
##
# nameOverride:

## String to fully override postgresql.fullname template
##
# fullnameOverride:

##
## Init containers parameters:
## volumePermissions: Change the owner of the persist volume mountpoint to RunAsUser:fsGroup
##
volumePermissions:
  enabled: true
  image:
    registry: docker.io
    repository: bitnami/minideb
    tag: stretch
    ## Specify a imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    pullPolicy: Always
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
  ## Init container Security Context
  securityContext:
    runAsUser: 0

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

## Pod Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
  enabled: false
  ## Name of an already existing service account. Setting this value disables the automatic service account creation.
  # name:

replication:
  enabled: true
  user: repl_user
  password: repl_password
  slaveReplicas: 1
  ## Set synchronous commit mode: on, off, remote_apply, remote_write and local
  ## ref: https://www.postgresql.org/docs/9.6/runtime-config-wal.html#GUC-WAL-LEVEL
  synchronousCommit: "off"
  ## From the number of `slaveReplicas` defined above, set the number of those that will have synchronous replication
  ## NOTE: It cannot be > slaveReplicas
  numSynchronousReplicas: 0
  ## Replication Cluster application name. Useful for defining multiple replication policies
  applicationName: my_application

## PostgreSQL admin user
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run
postgresqlUsername: postgres

## PostgreSQL password
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run
##
postgresqlPassword: "c43b8415ASDFASDFSDF"

## PostgreSQL password using existing secret
## existingSecret: secret

## Mount PostgreSQL secret as a file instead of passing environment variable
# usePasswordFile: false

## Create a database
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-on-first-run
##
postgresqlDatabase: jin_2

## PostgreSQL data dir
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
##
postgresqlDataDir: /bitnami/postgresql/data

## Specify extra initdb args
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
##
# postgresqlInitdbArgs:

## Specify a custom location for the PostgreSQL transaction log
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
##
# postgresqlInitdbWalDir:

## PostgreSQL configuration
## Specify runtime configuration parameters as a dict, using camelCase, e.g.
## {"sharedBuffers": "500MB"}
## Alternatively, you can put your postgresql.conf under the files/ directory
## ref: https://www.postgresql.org/docs/current/static/runtime-config.html
##
# postgresqlConfiguration:

## PostgreSQL extended configuration
## As above, but _appended_ to the main configuration
## Alternatively, you can put your *.conf under the files/conf.d/ directory
## https://github.com/bitnami/bitnami-docker-postgresql#allow-settings-to-be-loaded-from-files-other-than-the-default-postgresqlconf
##
# postgresqlExtendedConf:

## PostgreSQL client authentication configuration
## Specify content for pg_hba.conf
## Default: do not create pg_hba.conf
## Alternatively, you can put your pg_hba.conf under the files/ directory
# pgHbaConfiguration: |-
#   local all all trust
#   host all all localhost trust
#   host mydatabase mysuser 192.168.0.0/24 md5

## ConfigMap with PostgreSQL configuration
## NOTE: This will override postgresqlConfiguration and pgHbaConfiguration
# configurationConfigMap:

## ConfigMap with PostgreSQL extended configuration
# extendedConfConfigMap:

## initdb scripts
## Specify dictionary of scripts to be run at first boot
## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
##
# initdbScripts:
#   my_init_script.sh: |
#      #!/bin/sh
#      echo "Do something."

## ConfigMap with scripts to be run at first boot
## NOTE: This will override initdbScripts
# initdbScriptsConfigMap:

## Secret with scripts to be run at first boot (in case it contains sensitive information)
## NOTE: This can work along initdbScripts or initdbScriptsConfigMap
# initdbScriptsSecret:

## Optional duration in seconds the pod needs to terminate gracefully.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
##
# terminationGracePeriodSeconds: 30

## PostgreSQL service configuration
service:
  ## PosgresSQL service type
  type: ClusterIP
  # clusterIP: None
  port: 5432

  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  # nodePort:

  ## Provide any additional annotations which may be required. This can be used to
  annotations: {}
  ## Set the LoadBalancer service type to internal only.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  # loadBalancerIP:

  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24

## PostgreSQL data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
##   set, choosing the default provisioner.  (gp2 on AWS, standard on
##   GKE, AWS & OpenStack)
##
persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## If defined, PVC must be created manually before volume will be bound
  ## The value is evaluated as a template, so, for example, the name can depend on .Release or .Chart
  ##
  # existingClaim:

  ## The path the volume will be mounted at, useful when using different
  ## PostgreSQL images.
  ##
  mountPath: /bitnami/postgresql

  ## The subdirectory of the volume to mount to, useful in dev environments
  ## and one PV for multiple services.
  ##
  subPath: ""

  storageClass: "postgres-pv"
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}

## updateStrategy for PostgreSQL StatefulSet and its slaves StatefulSets
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
  type: RollingUpdate

##
## PostgreSQL Master parameters
##
master:
  ## Node, affinity and tolerations labels for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
  nodeSelector: {}
  affinity: {}
  tolerations: []
  podLabels: {}
  podAnnotations: {}
  ## Additional PostgreSQL Master Volume mounts
  ##
  extraVolumeMounts: []
  ## Additional PostgreSQL Master Volumes
  ##
  extraVolumes: []

##
## PostgreSQL Slave parameters
##
slave:
  ## Node, affinity and tolerations labels for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
  nodeSelector: {}
  affinity: {}
  tolerations: []
  podLabels: {}
  podAnnotations: {}
  ## Additional PostgreSQL Slave Volume mounts
  ##
  extraVolumeMounts: []
  ## Additional PostgreSQL Slave Volumes
  ##
  extraVolumes: []

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  requests:
    memory: 256Mi
    cpu: 250m

networkPolicy:
  ## Enable creation of NetworkPolicy resources.
  ##
  enabled: false

  ## The Policy model to apply. When set to false, only pods with the correct
  ## client label will have network access to the port PostgreSQL is listening
  ## on. When true, PostgreSQL will accept connections from any source
  ## (with the correct destination port).
  ##
  allowExternal: true

## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1

readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1

## Configure metrics exporter
##
metrics:
  enabled: false
  # resources: {}
  service:
    type: ClusterIP
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9187"
    loadBalancerIP:
  serviceMonitor:
    enabled: false
    additionalLabels: {}
    # namespace: monitoring
    # interval: 30s
    # scrapeTimeout: 10s
  image:
    registry: docker.io
    repository: bitnami/postgres-exporter
    tag: 0.5.1-debian-9-r41
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
  ## Pod Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  securityContext:
    enabled: false
    runAsUser: 1001
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
  ## Configure extra options for liveness and readiness probes
  livenessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

# Define custom environment variables to pass to the image here
extraEnv: {}

happysalada commented 5 years ago

Researching a bit, it seems that the environment variable names for the underlying Bitnami images have changed: according to the docs, instead of POSTGRES_USER it should be POSTGRESQL_USER. I will try a little later, changing the environment variable names, and see if that fixes the problem. Let me know if you see something I missed.
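
One quick way to confirm which of these variables the running container actually receives is to dump its environment. A minimal sketch, assuming the pod name from the logs above:

# List the PostgreSQL-related environment variables inside the running pod (pod name is an assumption)
kubectl exec -it pg-postgresql-0 -- env | grep -E '^POSTGRES(QL)?_'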

happysalada commented 5 years ago

Bingo, that commit broke the Bitnami image support: https://github.com/helm/charts/commit/2d50b559445eccb75af97beffa1502417211d688#diff-7cc3f808fcb264d3efe83e85db9b5ef2. There is even a comment from somebody who noticed it.

happysalada commented 5 years ago

lol, if that is correct, the bug has been there for 6 months.

alemorcuq commented 5 years ago

Hi, @happysalada.

Thank you for your detailed messages and for taking the time to investigate this issue.

Regarding https://github.com/helm/charts/commit/2d50b559445eccb75af97beffa1502417211d688#diff-7cc3f808fcb264d3efe83e85db9b5ef2, the bitnami/postgresql container has all the official PostgreSQL container environment variables aliased, so it can work with either of them. You can see the code here.

I've tried to reproduce your issue using the same values.yaml file you provided (minus the custom storageClass) and it worked for me. Here's what I did:

$ helm repo update
$ helm install stable/postgresql -f values.yaml

Both the master and the slave start:

$ kubectl logs postgres-postgresql-master-0
postgresql 13:11:41.70
postgresql 13:11:41.73 Welcome to the Bitnami postgresql container
postgresql 13:11:41.73 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
postgresql 13:11:41.73 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
postgresql 13:11:41.73 Send us your feedback at containers@bitnami.com
postgresql 13:11:41.73
postgresql 13:11:41.84 INFO  ==> ** Starting PostgreSQL setup **
postgresql 13:11:41.89 INFO  ==> Validating settings in POSTGRESQL_* env vars..
postgresql 13:11:41.89 INFO  ==> Initializing PostgreSQL database...
postgresql 13:11:41.91 INFO  ==> postgresql.conf file not detected. Generating it...
postgresql 13:11:41.95 INFO  ==> pg_hba.conf file not detected. Generating it...
postgresql 13:11:43.80 INFO  ==> Starting PostgreSQL in background...
postgresql 13:11:44.23 INFO  ==> Changing password of postgres
postgresql 13:11:44.25 INFO  ==> Creating replication user repl_user
postgresql 13:11:44.27 INFO  ==> Configuring replication parameters
postgresql 13:11:44.29 INFO  ==> Configuring fsync
postgresql 13:11:44.31 INFO  ==> Loading custom scripts...
postgresql 13:11:44.31 INFO  ==> Enabling remote connections
postgresql 13:11:44.33 INFO  ==> Stopping PostgreSQL...

postgresql 13:11:45.34 INFO  ==> ** PostgreSQL setup finished! **
postgresql 13:11:45.39 INFO  ==> ** Starting PostgreSQL **
2019-09-11 13:11:45.404 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2019-09-11 13:11:45.404 GMT [1] LOG:  listening on IPv6 address "::", port 5432
2019-09-11 13:11:45.408 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2019-09-11 13:11:45.421 GMT [199] LOG:  database system was shut down at 2019-09-11 13:11:44 GMT
2019-09-11 13:11:45.426 GMT [1] LOG:  database system is ready to accept connections
$ kubectl logs postgres-postgresql-slave-0
postgresql 13:11:47.07
postgresql 13:11:47.07 Welcome to the Bitnami postgresql container
postgresql 13:11:47.07 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
postgresql 13:11:47.08 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
postgresql 13:11:47.08 Send us your feedback at containers@bitnami.com
postgresql 13:11:47.08
postgresql 13:11:47.17 INFO  ==> ** Starting PostgreSQL setup **
postgresql 13:11:47.23 INFO  ==> Validating settings in POSTGRESQL_* env vars..
postgresql 13:11:47.24 INFO  ==> Initializing PostgreSQL database...
postgresql 13:11:47.25 INFO  ==> postgresql.conf file not detected. Generating it...
postgresql 13:11:47.37 INFO  ==> pg_hba.conf file not detected. Generating it...
postgresql 13:11:47.41 INFO  ==> Waiting for replication master to accept connections (60 timeout)...
postgres-postgresql:5432 - accepting connections
postgresql 13:11:48.45 INFO  ==> Replicating the initial database
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/2000028 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_216"
    0/31353 kB (0%), 0/1 tablespace (...ami/postgresql/data/backup_label)
 4266/31353 kB (13%), 0/1 tablespace (...tgresql/data/base/13067/3602_fsm)
31362/31362 kB (100%), 0/1 tablespace (...ostgresql/data/global/pg_control)
31362/31362 kB (100%), 1/1 tablespace
pg_basebackup: write-ahead log end point: 0/20000F8
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: base backup completed
postgresql 13:11:49.76 INFO  ==> Configuring replication parameters
postgresql 13:11:49.79 INFO  ==> Configuring fsync
postgresql 13:11:49.79 INFO  ==> Setting up streaming replication slave...
postgresql 13:11:49.81 INFO  ==> Loading custom scripts...
postgresql 13:11:49.81 INFO  ==> Enabling remote connections
postgresql 13:11:49.82 INFO  ==> Stopping PostgreSQL...
postgresql 13:11:49.83 INFO  ==> ** PostgreSQL setup finished! **

postgresql 13:11:49.90 INFO  ==> ** Starting PostgreSQL **
2019-09-11 13:11:50.087 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2019-09-11 13:11:50.087 GMT [1] LOG:  listening on IPv6 address "::", port 5432
2019-09-11 13:11:50.094 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2019-09-11 13:11:50.110 GMT [168] LOG:  database system was interrupted; last known up at 2019-09-11 13:11:48 GMT
2019-09-11 13:11:50.201 GMT [168] LOG:  entering standby mode
2019-09-11 13:11:50.206 GMT [168] LOG:  redo starts at 0/2000028
2019-09-11 13:11:50.208 GMT [168] LOG:  consistent recovery state reached at 0/20000F8
2019-09-11 13:11:50.209 GMT [1] LOG:  database system is ready to accept read only connections
2019-09-11 13:11:50.219 GMT [172] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1

The secret gives me the password you set in the values.yaml:

$ kubectl get secret postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode
c43b8415ASDFASDFSDF

And I can connect with it to the database:

$ kubectl exec -ti postgres-postgresql-master-0 bash
I have no name!@postgres-postgresql-master-0:/$ PGPASSWORD=c43b8415ASDFASDFSDF psql -U postgres
psql (11.5)
Type "help" for help.

postgres=#

Are you trying to restore some persisted data instead of deploying from scratch? Maybe something broke recently and we didn't notice it. Is that the case? Can you post the full logs of the failing containers? Do you remember which versions of the containers you were previously using?

Also, can you try following my steps from scratch to try and narrow the problem a bit more?

Regards, Alejandro

happysalada commented 5 years ago

Thanks for your response!

I guess it has to be the custom storage then.

Doing exactly what you did, here is what I get for the master logs:

braided-parrot-postgresql postgresql 19:05:26.91 INFO  ==> ** Starting PostgreSQL **
braided-parrot-postgresql 2019-09-11 19:05:26.926 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
braided-parrot-postgresql 2019-09-11 19:05:26.926 GMT [1] LOG:  listening on IPv6 address "::", port 5432
braided-parrot-postgresql 2019-09-11 19:05:26.935 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
braided-parrot-postgresql 2019-09-11 19:05:26.956 GMT [152] LOG:  database system was shut down at 2019-09-08 23:36:20 GMT
braided-parrot-postgresql 2019-09-11 19:05:26.956 GMT [152] LOG:  invalid record length at 0/1653530: wanted 24, got 0
braided-parrot-postgresql 2019-09-11 19:05:26.956 GMT [152] LOG:  invalid primary checkpoint record
braided-parrot-postgresql 2019-09-11 19:05:26.956 GMT [152] PANIC:  could not locate a valid checkpoint record
braided-parrot-postgresql 2019-09-11 19:05:27.057 GMT [1] LOG:  startup process (PID 152) was terminated by signal 6: Aborted
braided-parrot-postgresql 2019-09-11 19:05:27.057 GMT [1] LOG:  aborting startup due to startup process failure
braided-parrot-postgresql 2019-09-11 19:05:27.063 GMT [1] LOG:  database system is shut down

For the slave:

braided-parrot-postgresql 2019-09-11 19:06:28.917 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
braided-parrot-postgresql 2019-09-11 19:06:28.917 GMT [1] LOG:  listening on IPv6 address "::", port 5432
braided-parrot-postgresql 2019-09-11 19:06:28.922 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
braided-parrot-postgresql 2019-09-11 19:06:28.936 GMT [155] LOG:  database system was shut down at 2019-09-08 23:36:20 GMT
braided-parrot-postgresql 2019-09-11 19:06:28.936 GMT [155] LOG:  invalid record length at 0/1653530: wanted 24, got 0
braided-parrot-postgresql 2019-09-11 19:06:28.936 GMT [155] LOG:  invalid primary checkpoint record
braided-parrot-postgresql 2019-09-11 19:06:28.936 GMT [155] PANIC:  could not locate a valid checkpoint record
braided-parrot-postgresql 2019-09-11 19:06:29.043 GMT [1] LOG:  startup process (PID 155) was terminated by signal 6: Aborted
braided-parrot-postgresql 2019-09-11 19:06:29.043 GMT [1] LOG:  aborting startup due to startup process failure
braided-parrot-postgresql 2019-09-11 19:06:29.047 GMT [1] LOG:  database system is shut down

I'll give it a go with a different storage class
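
Before switching, it may help to confirm which PVC and storage class actually back the data directory and whether the volume still holds data from an earlier deployment. A minimal sketch; the label selector and pod name are assumptions based on the release shown in the logs:

# Show the PVCs created for the release and the storage class backing them
kubectl get pvc -l app=postgresql -o wide
kubectl describe storageclass postgres-pv

# If the container stays up long enough, check whether the volume already contains an old data directory
kubectl exec -it braided-parrot-postgresql-master-0 -- ls -la /bitnami/postgresql/data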

happysalada commented 5 years ago

ok, it was the storage class. My bad, sorry for wasting your time. As far as I'm concerned, this issue can be closed.

alemorcuq commented 5 years ago

Glad you fixed it, @happysalada. Let us know if you find any other issues in the future.

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

heckad commented 4 years ago

I have the same problem. When will it be fixed?

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

alemorcuq commented 4 years ago

I have the same problem. When will it be fixed?

This issue is solved; you can read through the thread to understand the problem and the solution. If you still have problems, please open a new ticket detailing your case so we can help you there.

heckad commented 4 years ago

Another reason may be that the user created a PostgreSQL deployment, deleted it along with the PV, and then created another one. Deleting the PV does not delete the underlying directory, so if you recreate PostgreSQL and the PV, the old password will still be in effect because PostgreSQL will use the old data directory.
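
If that is the case, removing the leftover claim before reinstalling forces a fresh data directory, and therefore a password that matches the new secret. A minimal sketch with an illustrative release name; note that this deletes the old database, so only do it if that data is disposable:

# Remove the release (Helm 3 syntax); the chart intentionally keeps the PVC behind
helm uninstall my-release

# Delete the leftover claim so the next install starts from an empty volume (PVC name is an assumption)
kubectl get pvc
kubectl delete pvc data-my-release-postgresql-0

# Reinstall; a new data directory is initialized with the new password
helm install my-release stable/postgresql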

irfn commented 4 years ago

I am seeing this issue with a default install as well, using Helm 3: helm install somedb stable/postgresql. The password in the generated secret is not working for the postgres user.

Some of the logs seen:

2019-12-12 09:05:02.448 GMT [196] FATAL: password authentication failed for user "postgres"
2019-12-12 09:05:02.448 GMT [196] DETAIL: Password does not match for user "postgres". Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"

When I skip the name and use helm install stable/postgresql --generate-name, the login works.

alemorcuq commented 4 years ago

That's probably because you have a PVC from a previous deployment named somedb that is configured with a different password, @irfn.

irfn commented 4 years ago

@alemorcuq Thanks that was the issue.

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

polRk commented 4 years ago

Installed via Helm 3.

$ kubectl exec -ti postgresql-master-0 bash
I have no name!@postgresql-master-0:/$ PGPASSWORD=NXkmda3WdI psql -U postgres
psql: FATAL:  password authentication failed for user "postgres"

alemorcuq commented 4 years ago

Hi, @polRk.

I'm not able to reproduce your issue:

$ echo $(kubectl get secret --namespace default amorenopsql-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
tvb8jjONhJ

$ kubectl exec -ti amorenopsql-postgresql-0 bash
I have no name!@amorenopsql-postgresql-0:/$ PGPASSWORD=tvb8jjONhJ psql -U postgres
psql (11.6)
Type "help" for help.

postgres=#

Usually this kind of error happens because you have an existing PVC from a previous deployment that is configured with a different password. Could you check that?

polRk commented 4 years ago

Usually this kind of error happens because you have an existing PVC from a previous deployment that is configured with a different password. Could you check that?

Yes :) I deleted all the PVCs and recreated the Helm chart. No more problems. Thank you!

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

denisw commented 4 years ago

I hit this issue as well. It seems as if recent versions of the Bitnami PostgreSQL Docker image expect POSTGRESQL_PASSWORD as the environment variable name, whereas the Helm chart sets POSTGRES_PASSWORD instead.

dani8art commented 4 years ago

Hi @denisw, thank you for your feedback and sorry for the confusion. It may be a bit misleading, but the value can be passed in both ways: POSTGRESQL_PASSWORD and POSTGRES_PASSWORD are implemented as aliases for this password. You can find the documentation here: https://github.com/bitnami/bitnami-docker-postgresql#environment-variables-aliases

Thanks and regards.
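
For illustration only, not the actual Bitnami implementation: the aliasing idea amounts to mapping the official variable names onto the container's own names at startup, so either spelling ends up populating POSTGRESQL_PASSWORD before validation runs.

# Hypothetical sketch of environment-variable aliasing at container startup (simplified; see the linked docs for the real code)
: "${POSTGRESQL_PASSWORD:=${POSTGRES_PASSWORD:-}}"
: "${POSTGRESQL_USERNAME:=${POSTGRES_USER:-}}"
: "${POSTGRESQL_DATABASE:=${POSTGRES_DB:-}}"
export POSTGRESQL_PASSWORD POSTGRESQL_USERNAME POSTGRESQL_DATABASE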

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

stale[bot] commented 4 years ago

This issue is being automatically closed due to inactivity.

satbirmalhi commented 3 years ago

Researching a bit, it seems that the environment variable names for the underlying Bitnami images have changed: instead of POSTGRES_USER it should be POSTGRESQL_USER

This worked for me. Thanks

tbrodbeck commented 2 years ago

It would be great if helm uninstall deleted the PVC as well! This caused a lot of confusion :D Please reopen this issue!