Closed — gerrit8143 closed this issue 3 years ago
Same issue here; for me it happens after updating the configuration. The initial deployment works fine, but as soon as I redeploy Keycloak (without touching PostgreSQL) I get the same error: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "keycloak".
I thought it was because the chart was regenerating a new secret at deployment time, so I set keycloak.persistence.existingSecret and postgresql.existingSecret to a known secret before applying the chart, but I'm still getting the same exception.
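For anyone trying that approach, this is roughly what the "known secret" setup looks like. The secret name keycloak-db, the key names, and the file path are illustrative placeholders, not values the chart mandates; check your chart version's values.yaml for the exact keys it expects:

```shell
# Sketch only: pre-create one Secret and reference it from both
# keycloak.persistence.existingSecret and postgresql.existingSecret.
# "keycloak-db" and the key names below are placeholders.
cat > /tmp/keycloak-db-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db
type: Opaque
stringData:
  password: change-me             # key read on the keycloak side
  postgresql-password: change-me  # key read by the postgresql subchart
EOF
# kubectl apply -f /tmp/keycloak-db-secret.yaml   # then install the chart
```

Note that even with a stable secret, a PostgreSQL data volume that was initialized with an older password keeps rejecting the new one until the volume is wiped or the password is changed inside the database itself.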
The chart is tested with Postgres, and that does work. Here are the values: https://github.com/codecentric/helm-charts/blob/master/charts/keycloak/ci/postgres-ha-values.yaml#L30-L34
It looks like you don't set a password for Postgres. That might be the problem. Could you check that?
Yes, the password was not set because I thought it would be generated automatically when empty:
## PostgreSQL Password for the new user.
## If not set, a random 10 characters password will be used.
##
postgresqlPassword: ""
I now tried with the password "keycloak", but got the same problem:
Caused by: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "keycloak"
postgresql:
  ### PostgreSQL User to create.
  ##
  postgresqlUsername: keycloak
  ## PostgreSQL Password for the new user.
  ## If not set, a random 10 characters password will be used.
  ##
  postgresqlPassword: "keycloak"
  ## PostgreSQL Database to create.
  ##
  postgresqlDatabase: keycloak
  ## Persistent Volume Storage configuration.
  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes
  ##
  persistence:
    ## Enable PostgreSQL persistence using Persistent Volume Claims.
    ##
    enabled: true
I also tried with https://github.com/codecentric/helm-charts/blob/master/charts/keycloak/ci/postgres-ha-values.yaml#L30-L34, but without luck.
I can reproduce the problem if I don't set a password. However, if I set it explicitly, it works fine.
I'd suggest you always set a password to avoid it being regenerated on upgrades.
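One way to follow that advice is a small values overlay that pins the same password on both sides of the chart. The key names come from the values shown elsewhere in this thread; the password itself is a placeholder (prefer a strong value without shell-special characters):

```shell
# Pin the database password on both sides of the chart so that a
# `helm upgrade` cannot regenerate it. "pinned-keycloak-password"
# is a placeholder.
cat > /tmp/keycloak-db-values.yaml <<'EOF'
keycloak:
  persistence:
    dbPassword: "pinned-keycloak-password"
postgresql:
  postgresqlPassword: "pinned-keycloak-password"
EOF
# helm upgrade --install keycloak codecentric/keycloak -f /tmp/keycloak-db-values.yaml
grep -c 'pinned-keycloak-password' /tmp/keycloak-db-values.yaml   # prints 2
```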
I can't get it working even with the PostgreSQL password set and a fresh install using the current values.yaml from the repo:
Caused by: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "keycloak"
I did have a $ and a ! in the password; using a simple password without special characters seemed to work.
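A quick way to see why a $ in a password is dangerous: if any layer between the values file and the JDBC connection passes the password through a shell in a double-quoted context, everything from $ up to the next non-identifier character is expanded as a (usually unset) variable. This is a generic shell demonstration of the hazard, not the exact code path inside the Keycloak image:

```shell
# What a shell-interpreting layer does to a password containing '$'.
intended='Sup3r$ecret!'                 # value as written in values.yaml
expanded=$(eval echo "\"$intended\"")   # simulate one round of shell expansion
echo "intended: $intended"              # prints: intended: Sup3r$ecret!
echo "expanded: $expanded"              # prints: expanded: Sup3r!
# "$ecret" was treated as an unset variable and silently dropped, so the
# database receives a different password than the one you configured.
```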
I had the same issue.
I used a sidecar container to access a Postgres instance on Cloud SQL. I set the password through a secret using persistence.existingSecret and persistence.existingSecretKey. However, this resulted in the password authentication failed for user exception. Setting the password directly via persistence.dbPassword fixed it for me.
Is there any explanation for this?
Yes @fefi42 and @jansmets, there is a DB_PASSWORD_FILE parameter available, and I fear we have to use it; otherwise your passwords (and mine) will be shell-interpreted: https://hub.docker.com/r/jboss/keycloak/
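A sketch of what using DB_PASSWORD_FILE could look like with this chart's extraEnv / extraVolumes / extraVolumeMounts hooks. The secret name keycloak-db, the key name, and the mount path are placeholders; verify that your image tag supports DB_PASSWORD_FILE in the jboss/keycloak documentation:

```shell
# Untested sketch: mount the DB secret as a file and point DB_PASSWORD_FILE
# at it, so the password is read from disk instead of being expanded by a
# shell. "keycloak-db" and the mount path are placeholders.
cat > /tmp/db-password-file-values.yaml <<'EOF'
keycloak:
  extraEnv: |
    - name: DB_PASSWORD_FILE
      value: /secrets/db/password
  extraVolumes: |
    - name: db-secret
      secret:
        secretName: keycloak-db
  extraVolumeMounts: |
    - name: db-secret
      mountPath: /secrets/db
      readOnly: true
EOF
```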
I'm having fun with SealedSecrets on top of it... I will provide a pull request. Thanks go to #mobivia and #atu for the sponsoring.
I will go with the same /secrets folder but a different volume name, persistence instead of password (the current HTTP password): https://blog.sebastian-daschner.com/entries/multiple-kubernetes-volumes-directory
P.S. Credits for the hint go to @szottE.
@unguiculus can you have a look at my patch? https://github.com/codecentric/helm-charts/pull/164 I might pass through MUC on 12/2 if you want to go for a beer (on my way back to Berlin).
@zeph Too bad. I'd love to go for a beer but I'm in Nuremberg on Feb 12 speaking at https://www.meetup.com/de-DE/Kubernetes-Nurnberg/events/267907233/.
@unguiculus I can be there... I'll stop at Brenner PASS on the 11th Feb. night... sleep over, start in the morning and be at Nuremberg on that event too. Pass the night there and leave for Berlin on the morning after. See ya there... please now merge my pull request, I'm struggling also on ArgoCD to use the requirements.yaml properly https://github.com/argoproj/argo-cd/issues/3055
I had the same issue, but my problem was that I had nested the postgresql values inside the keycloak block:
keycloak:
  postgresql:
    postgresqlPassword: keycloak
The postgresql block needs to sit outside the keycloak block:
keycloak:
  replicas: 4
postgresql:
  postgresqlPassword: keycloak
Maybe this helps other people.
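The difference is purely structural: Helm only forwards values to the bundled postgresql subchart from the top-level postgresql: key, so a copy nested under keycloak: is silently ignored. A minimal illustration (the file paths are arbitrary):

```shell
cat > /tmp/wrong.yaml <<'EOF'
keycloak:
  postgresql:                  # nested: the subchart never sees this
    postgresqlPassword: keycloak
EOF
cat > /tmp/right.yaml <<'EOF'
keycloak:
  replicas: 4
postgresql:                    # top level: forwarded to the subchart
  postgresqlPassword: keycloak
EOF
# Only the correct file has "postgresql:" at column 0:
grep -c '^postgresql:' /tmp/wrong.yaml || true   # prints 0
grep -c '^postgresql:' /tmp/right.yaml           # prints 1
```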
Hey folks, I just thought I'd point out that I had this issue but it only occurred when I attempted to install outside of the default namespace.
Using these values:
init:
  image:
    repository: busybox
    tag: 1.31
    pullPolicy: IfNotPresent
  resources: {}
    # limits:
    #   cpu: "10m"
    #   memory: "32Mi"
    # requests:
    #   cpu: "10m"
    #   memory: "32Mi"

clusterDomain: cluster.local

## Optionally override the fully qualified name
# fullnameOverride: keycloak

## Optionally override the name
# nameOverride: keycloak

keycloak:
  replicas: 1

  image:
    repository: docker.io/jboss/keycloak
    # Overrides the image tag whose default is the chart version.
    tag: ""
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    pullSecrets: []
    # - myRegistrKeySecretName

  hostAliases: []
  # - ip: "1.2.3.4"
  #   hostnames:
  #     - "my.host.com"

  proxyAddressForwarding: true

  enableServiceLinks: false

  podManagementPolicy: Parallel

  restartPolicy: Always

  serviceAccount:
    # Specifies whether a service account should be created
    create: false
    # The name of the service account to use.
    # If not set and create is true, a name is generated using the fullname template
    name:

  securityContext:
    fsGroup: 1000

  containerSecurityContext:
    runAsUser: 1000
    runAsNonRoot: true

  ## The path keycloak will be served from. To serve keycloak from the root path, use two quotes (e.g. "").
  basepath: auth

  ## Additional init containers, e. g. for providing custom themes
  extraInitContainers: |

  ## Additional sidecar containers, e. g. for a database proxy, such as Google's cloudsql-proxy
  extraContainers: |

  ## lifecycleHooks defines the container lifecycle hooks
  lifecycleHooks: |
    # postStart:
    #   exec:
    #     command: ["/bin/sh", "-c", "ls"]

  ## Override the default for the Keycloak container, e.g. for clusters with large cache that requires rebalancing.
  terminationGracePeriodSeconds: 60

  ## Additional arguments to start command e.g. -Dkeycloak.import= to load a realm
  extraArgs: ""

  ## Username for the initial Keycloak admin user
  username: keycloak

  ## Password for the initial Keycloak admin user. Applicable only if existingSecret is not set.
  ## If not set, a random 10 characters password will be used
  password: ""

  # Specifies an existing secret to be used for the admin password
  existingSecret: ""

  # The key in the existing secret that stores the password
  existingSecretKey: password

  ## jGroups configuration (only for HA deployment)
  jgroups:
    exposePort: true
    discoveryProtocol: dns.DNS_PING
    discoveryProperties: >
      "dns_query={{ template "keycloak.fullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}"

  javaToolOptions: >-
    -XX:+UseContainerSupport
    -XX:MaxRAMPercentage=50.0

  ## Allows the specification of additional environment variables for Keycloak
  extraEnv: |
    # - name: KEYCLOAK_LOGLEVEL
    #   value: DEBUG
    # - name: WILDFLY_LOGLEVEL
    #   value: DEBUG
    # - name: CACHE_OWNERS
    #   value: "2"
    # - name: DB_QUERY_TIMEOUT
    #   value: "60"
    # - name: DB_VALIDATE_ON_MATCH
    #   value: true
    # - name: DB_USE_CAST_FAIL
    #   value: false

  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              {{- include "keycloak.selectorLabels" . | nindent 10 }}
            matchExpressions:
              - key: role
                operator: NotIn
                values:
                  - test
          topologyKey: kubernetes.io/hostname
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                {{- include "keycloak.selectorLabels" . | nindent 12 }}
              matchExpressions:
                - key: role
                  operator: NotIn
                  values:
                    - test
            topologyKey: failure-domain.beta.kubernetes.io/zone

  nodeSelector: {}
  priorityClassName: ""
  tolerations: []

  ## Additional pod labels
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  podLabels: {}

  ## Extra Annotations to be added to pod
  podAnnotations: {}

  livenessProbe: |
    httpGet:
      path: {{ if ne .Values.keycloak.basepath "" }}/{{ .Values.keycloak.basepath }}{{ end }}/
      port: http
    initialDelaySeconds: 300
    timeoutSeconds: 5

  readinessProbe: |
    httpGet:
      path: {{ if ne .Values.keycloak.basepath "" }}/{{ .Values.keycloak.basepath }}{{ end }}/realms/master
      port: http
    initialDelaySeconds: 30
    timeoutSeconds: 1

  resources: {}
    # limits:
    #   cpu: "100m"
    #   memory: "1024Mi"
    # requests:
    #   cpu: "100m"
    #   memory: "1024Mi"

  ## WildFly CLI configurations. They all end up in the file 'keycloak.cli' configured in the configmap which is
  ## executed on server startup.
  cli:
    enabled: true
    nodeIdentifier: |
      {{ .Files.Get "scripts/node-identifier.cli" }}
    logging: |
      {{ .Files.Get "scripts/logging.cli" }}
    ha: |
      {{ .Files.Get "scripts/ha.cli" }}
    datasource: |
      {{ .Files.Get "scripts/datasource.cli" }}
    # Custom CLI script
    custom: |

  ## Custom startup scripts to run before Keycloak starts up
  startupScripts: {}
    # mystartup.sh: |
    #   #!/bin/sh
    #
    #   echo 'Hello from my custom startup script!'

  ## Add additional volumes and mounts, e. g. for custom themes
  extraVolumes: |
  extraVolumeMounts: |

  ## Add additional ports, eg. for custom admin console
  extraPorts: |

  podDisruptionBudget: {}
    # maxUnavailable: 1
    # minAvailable: 1

  ## Extra annotations to be added to statefulset
  statefulsetAnnotations: {}

  service:
    annotations: {}
    # service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    labels: {}
    # key: value
    ## ServiceType
    ## ref: https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types
    type: ClusterIP
    ## Optional static port assignment for service type NodePort.
    # nodePort: 30000
    httpPort: 80
    httpNodePort: ""
    httpsPort: 8443
    httpsNodePort: ""
    # Optional: jGroups port for high availability clustering
    jgroupsPort: 7600
    ## Add additional ports, eg. for custom admin console
    extraPorts: |

  ## Ingress configuration.
  ## ref: https://kubernetes.io/docs/user-guide/ingress/
  ingress:
    enabled: true
    path: /
    annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    # ingress.kubernetes.io/affinity: cookie
    labels: {}
    # key: value
    ## List of hosts for the ingress
    hosts:
      - myhost
    ## TLS configuration
    tls:
      - hosts:
          - myhost
        secretName: myhost-tls

  ## OpenShift route configuration.
  ## ref: https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html
  route:
    enabled: false
    path: /
    annotations: {}
    # kubernetes.io/tls-acme: "true"
    # haproxy.router.openshift.io/disable_cookies: "true"
    # haproxy.router.openshift.io/balance: roundrobin
    labels: {}
    # key: value
    # Host name for the route
    host:
    # TLS configuration
    tls:
      enabled: true
      insecureEdgeTerminationPolicy: Redirect
      termination: edge

  ## Persistence configuration
  persistence:
    # If true, the Postgres chart is deployed
    deployPostgres: true
    # The database vendor. Can be either "postgres", "mysql", "mariadb", or "h2"
    dbVendor: postgres
    ## The following values only apply if "deployPostgres" is set to "false"
    dbName: keycloak
    dbHost: mykeycloak
    dbPort: 5432
    ## Database Credentials are loaded from a Secret residing in the same Namespace as keycloak.
    ## The Chart can read credentials from an existing Secret OR it can provision its own Secret.
    ## Specify existing Secret
    # If set, specifies the Name of an existing Secret to read db credentials from.
    existingSecret: ""
    existingSecretPasswordKey: ""  # read keycloak db password from existingSecret under this Key
    existingSecretUsernameKey: ""  # read keycloak db user from existingSecret under this Key
    ## Provision new Secret
    # Only used if existingSecret is not specified. In this case a new secret is created
    # populated by the variables below.
    dbUser: keycloak
    dbPassword: "sdflkjlkjlkjlkjsdfgsdfgsdfghddftghlikujoiuoiuoiusdfg"

postgresql:
  ### PostgreSQL User to create.
  ##
  postgresqlUsername: keycloak
  ## PostgreSQL Password for the new user.
  ## If not set, a random 10 characters password will be used.
  ##
  postgresqlPassword: "sdflkjlkjlkjlkjsdfgsdfgsdfghddftghlikujoiuoiuoiusdfg"
  ## PostgreSQL Database to create.
  ##
  postgresqlDatabase: keycloak
  ## Persistent Volume Storage configuration.
  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes
  ##
  persistence:
    ## Enable PostgreSQL persistence using Persistent Volume Claims.
    ##
    enabled: true

test:
  enabled: false
  image:
    repository: unguiculus/docker-python3-phantomjs-selenium
    tag: v1
    pullPolicy: IfNotPresent
  securityContext:
    fsGroup: 1000
  containerSecurityContext:
    runAsUser: 1000
    runAsNonRoot: true

prometheus:
  operator:
    ## Are you using Prometheus Operator?
    enabled: false
    serviceMonitor:
      ## Optionally set a target namespace in which to deploy serviceMonitor
      namespace: ""
      ## Additional labels to add to the ServiceMonitor so it is picked up by the operator.
      ## If using the [Helm Chart](https://github.com/helm/charts/tree/master/stable/prometheus-operator) this is the name of the Helm release.
      selector:
        release: prometheus
      ## Interval at which Prometheus scrapes metrics
      interval: 10s
      ## Timeout at which Prometheus timeouts scrape run
      scrapeTimeout: 10s
      ## The path to scrape
      path: /auth/realms/master/metrics
    prometheusRules:
      ## Add Prometheus Rules?
      enabled: false
      ## Additional labels to add to the PrometheusRule so it is picked up by the operator.
      ## If using the [Helm Chart](https://github.com/helm/charts/tree/master/stable/prometheus-operator) this is the name of the Helm release and 'app: prometheus-operator'
      selector:
        app: prometheus-operator
        release: prometheus
      ## Some example rules.
      rules: {}
      # - alert: keycloak-IngressHigh5xxRate
      #   annotations:
      #     message: The percentage of 5xx errors for keycloak over the last 5 minutes is over 1%.
      #   expr: (sum(rate(nginx_ingress_controller_response_duration_seconds_count{exported_namespace="mynamespace",ingress="mynamespace-keycloak",status=~"5[0-9]{2}"}[1m]))/sum(rate(nginx_ingress_controller_response_duration_seconds_count{exported_namespace="mynamespace",ingress="mynamespace-keycloak"}[1m])))*100 > 1
      #   for: 5m
      #   labels:
      #     severity: warning
      # - alert: keycloak-IngressHigh5xxRate
      #   annotations:
      #     message: The percentage of 5xx errors for keycloak over the last 5 minutes is over 5%.
      #   expr: (sum(rate(nginx_ingress_controller_response_duration_seconds_count{exported_namespace="mynamespace",ingress="mynamespace-keycloak",status=~"5[0-9]{2}"}[1m]))/sum(rate(nginx_ingress_controller_response_duration_seconds_count{exported_namespace="mynamespace",ingress="mynamespace-keycloak"}[1m])))*100 > 5
      #   for: 5m
      #   labels:
      #     severity: critical
I too have the same issue as @oliverkane. I needed to run a temporary Keycloak for testing and thought I would do it in another namespace, but got the same org.postgresql.util.PSQLException: FATAL: password authentication failed for user "keycloak" error.
Chart version: 11.5, Keycloak: 8.0.0.
In addition, when running the same Helm chart targeting the keycloak namespace, it works fine.
I had the same problem. In my case it turned out that the Postgres init scripts were slower than expected in my environment, so the readiness or liveness probe killed the pod before the user initialization was complete. When the pod came back, the init scripts assumed that initialization was already done because the data directory was populated.
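If slow init scripts racing the probes is the cause, one mitigation is to give the database pod a longer grace period before probes start. The parameter names below are assumed from the postgresql subchart bundled at the time; verify them against your subchart's values.yaml:

```shell
# Untested sketch: delay the database pod's probes so slow init scripts can
# finish before the kubelet starts health-checking. The delay values are
# examples; the livenessProbe/readinessProbe keys are an assumption about
# the postgresql subchart's values.
cat > /tmp/slow-init-values.yaml <<'EOF'
postgresql:
  livenessProbe:
    initialDelaySeconds: 120
  readinessProbe:
    initialDelaySeconds: 60
EOF
```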
For what it's worth, I've been using ORY's stack in development. Keycloak has been wonderful in production, because of all the features it offers, but for development it's just far too slow and hungry in terms of resources (2 gigs to start up?! and like a minute on modest hardware). If you don't mind building out some basic UI or using their examples with a bit of forking, they're cloud native and I've been very happy with them.
This issue has been marked as stale because it has been open for 30 days with no activity. It will be automatically closed in 10 days if no further activity occurs.
Same issue here, I can't manage to fix it
Hi, I deployed the current Keycloak Helm chart with Postgres support, but Keycloak doesn't come up because it can't connect to Postgres: Caused by: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "keycloak"
helm install --name keycloak --namespace keycloak codecentric/keycloak -f values-postgres.yaml