bitnami / charts

Bitnami Helm Charts
https://bitnami.com

[bitnami/keycloak] Keycloak with KeycloakConfigCli -- everything broken at max #18828

Closed — busyboy77 closed this issue 12 months ago

busyboy77 commented 1 year ago

Name and Version

bitnamicharts/keycloak:16.0.5

What architecture are you using?

amd64

What steps will reproduce the bug?

Executed this command to install the Helm chart:

helm install demo-keycloak oci://registry-1.docker.io/bitnamicharts/keycloak --values=./values.yaml

Result:

root@devops61:~/keycloak/keycloak# helm install demo-keycloak oci://registry-1.docker.io/bitnamicharts/keycloak --values=./values.yaml
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/rancher/rke2/rke2.yaml
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/rancher/rke2/rke2.yaml
Pulled: registry-1.docker.io/bitnamicharts/keycloak:16.0.5
Digest: sha256:edb093ba06978b29d7fcb78e6f14389b707a1eb2224af1e91c1b8b5d4b8d3212
Error: INSTALLATION FAILED: failed post-install: 1 error occurred:
        * timed out waiting for the condition

Final result:

root@devops61:~/keycloak/keycloak# k get pods
NAME                                              READY   STATUS    RESTARTS      AGE
common.names.fullname-keycloak-config-cli-dzv9h   0/1     Error     0             37m
common.names.fullname-keycloak-config-cli-rfr4g   0/1     Error     0             34m
demo-keycloak-0                                   0/1     Running   1 (31m ago)   37m
demo-keycloak-postgresql-0                        1/1     Running   0             37m
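
The two Error pods belong to the post-install hook Job that timed out above; their logs should show why the config-cli run failed (a debugging sketch, pod names taken from the listing):

# Inspect the failed keycloak-config-cli hook pods directly.
kubectl logs common.names.fullname-keycloak-config-cli-dzv9h
kubectl logs common.names.fullname-keycloak-config-cli-rfr4g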

Issues:

1. The name of the keycloak-config-cli pod is not interpolated by the templates (one way to confirm this is shown below).
2. Keycloak never stays up; the container keeps restarting.
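
One way to confirm issue (1) without installing is to render the chart client-side and look for the un-interpolated name (a sketch; the grep target is taken from the pod names above):

# Render the templates locally and search for the literal, un-rendered name.
helm template demo-keycloak oci://registry-1.docker.io/bitnamicharts/keycloak \
  --version 16.0.5 --values=./values.yaml | grep -n 'common.names.fullname'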

Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       38m                    default-scheduler  Successfully assigned default/demo-keycloak-0 to devops66.ef.com
  Normal   AddedInterface  38m                    multus             Add eth0 [10.42.1.50/32] from k8s-pod-network
  Normal   Pulled          38m                    kubelet            Container image "docker.io/bitnami/keycloak:22.0.1-debian-11-r20" already present on machine
  Normal   Created         38m                    kubelet            Created container keycloak
  Normal   Started         38m                    kubelet            Started container keycloak
  Warning  Unhealthy       35m (x22 over 37m)     kubelet            Readiness probe failed: Get "http://10.42.1.50:8080/iauthrealms/master": dial tcp 10.42.1.50:8080: connect: connection refused
  Warning  Unhealthy       33m                    kubelet            Liveness probe failed: Get "http://10.42.1.50:8080/iauth": dial tcp 10.42.1.50:8080: connect: connection refused
  Warning  Unhealthy       3m28s (x196 over 29m)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404

Are you using any custom parameters or values?

My values.yaml (changes only):

auth:
  adminUser: admin
  adminPassword: "admin"
httpRelativePath: "/iauth"

extraEnvVars:
    - name:  KEYCLOAK_ADMIN_URL
      value: https://devops67.ef.com/iauth/
    - name:  KEYCLOAK_EXTRA_ARGS
      value: -Dkeycloak.frontendUrl=https://devops67.ef.com/auth/  -Dkeycloak.profile.feature.upload_scripts=enabled
    - name:  KEYCLOAK_FRONTEND_URL
      value: https://devops67.ef.com/iauth/
    - name:  KEYCLOAK_LOGLEVEL
      value: DEBUG
    - name:  KEYCLOAK_PASSWORD
      value: admin
    - name:  KEYCLOAK_PROXY_ADDRESS_FORWARDING
      value: "true"
    - name:  KEYCLOAK_USER
      value: admin
    - name:  NODE_ENV
      value: development

ingress:
   enabled: false
   pathType: Prefix
   hostname: devops67.ef.com
   annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      nginx.org/server-snippets: |
         location / {
           proxy_set_header X-Forwarded-For $host;
           proxy_set_header X-Forwarded-Proto $scheme;
         }
   tls: true
   selfSigned: true

metrics:
  enabled: false
  serviceMonitor:
    enabled: false

keycloakConfigCli:
  enabled: true
  existingConfigmap: "ef-keycloak-realm"
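
Note: the probe failures in the events above are consistent with this httpRelativePath value. The rendered probes (see the pod description later in the thread) append realms/master directly to it, so "/iauth" without a trailing slash produces the bogus "/iauthrealms/master" path; the reporter confirms the trailing-slash fix further down. A minimal values sketch:

# The readiness probe path is built as "<httpRelativePath>realms/master",
# so the value must keep its trailing slash.
httpRelativePath: "/iauth/"   # "/iauth" renders as "/iauthrealms/master"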

What is the expected behavior?

Keycloak with keycloak-config-cli should bootstrap the realm. It doesn't, and Keycloak itself never stays up.

Please note: in the same environment, Keycloak version 19.x stays up without any issues.

What do you see instead?

root@devops61:~/keycloak/keycloak# k get pods
NAME                                              READY   STATUS    RESTARTS      AGE
common.names.fullname-keycloak-config-cli-dzv9h   0/1     Error     0             40m
common.names.fullname-keycloak-config-cli-rfr4g   0/1     Error     0             37m
demo-keycloak-0                                   0/1     Running   1 (35m ago)   40m
demo-keycloak-postgresql-0                        1/1     Running   0             40m
root@devops61:~/keycloak/keycloak#

And

Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       38m                    default-scheduler  Successfully assigned default/demo-keycloak-0 to devops66.ef.com
  Normal   AddedInterface  38m                    multus             Add eth0 [10.42.1.50/32] from k8s-pod-network
  Normal   Pulled          38m                    kubelet            Container image "docker.io/bitnami/keycloak:22.0.1-debian-11-r20" already present on machine
  Normal   Created         38m                    kubelet            Created container keycloak
  Normal   Started         38m                    kubelet            Started container keycloak
  Warning  Unhealthy       35m (x22 over 37m)     kubelet            Readiness probe failed: Get "http://10.42.1.50:8080/iauthrealms/master": dial tcp 10.42.1.50:8080: connect: connection refused
  Warning  Unhealthy       33m                    kubelet            Liveness probe failed: Get "http://10.42.1.50:8080/iauth": dial tcp 10.42.1.50:8080: connect: connection refused
  Warning  Unhealthy       3m28s (x196 over 29m)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404

Additional information

It's an RKE2-based cluster.

root@devops61:~/keycloak/keycloak# k get nodes -o wide
NAME              STATUS   ROLES                       AGE     VERSION           INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
devops61.ef.com   Ready    control-plane,etcd,master   3d16h   v1.25.12+rke2r1   10.192.168.61   <none>        Ubuntu 22.04.3 LTS   5.15.0-79-generic   containerd://1.7.1-k3s1
devops62.ef.com   Ready    control-plane,etcd,master   3d16h   v1.25.12+rke2r1   10.192.168.62   <none>        Ubuntu 22.04.3 LTS   5.15.0-79-generic   containerd://1.7.1-k3s1
devops63.ef.com   Ready    control-plane,etcd,master   3d16h   v1.25.12+rke2r1   10.192.168.63   <none>        Ubuntu 22.04.3 LTS   5.15.0-79-generic   containerd://1.7.1-k3s1
devops64.ef.com   Ready    <none>                      3d16h   v1.25.12+rke2r1   10.192.168.64   <none>        Ubuntu 22.04.3 LTS   5.15.0-79-generic   containerd://1.7.1-k3s1
devops65.ef.com   Ready    <none>                      3d16h   v1.25.12+rke2r1   10.192.168.65   <none>        Ubuntu 22.04.3 LTS   5.15.0-79-generic   containerd://1.7.1-k3s1
devops66.ef.com   Ready    <none>                      3d16h   v1.25.12+rke2r1   10.192.168.66   <none>        Ubuntu 22.04.3 LTS   5.15.0-79-generic   containerd://1.7.1-k3s1
devops68.ef.com   Ready    <none>                      3d16h   v1.25.12+rke2r1   10.192.168.68   <none>        Ubuntu 22.04.3 LTS   5.15.0-79-generic   containerd://1.7.1-k3s1
devops69.ef.com   Ready    <none>                      3d16h   v1.25.12+rke2r1   10.192.168.69   <none>        Ubuntu 22.04.3 LTS   5.15.0-79-generic   containerd://1.7.1-k3s1
root@devops61:~/keycloak/keycloak#
juan131 commented 1 year ago

Hi @busyboy77

Could you please share the logs of the demo-keycloak-0 pod's container? It seems it was restarted and it's unable to reach the "ready" status.

kubectl logs demo-keycloak-0 --previous

Use the --previous flag so we obtain the logs from the container instance that was restarted before reaching the ready status.

busyboy77 commented 1 year ago

root@devops61:~# kubectl logs demo-keycloak-0 --previous
keycloak 10:33:23.09
keycloak 10:33:23.16 Welcome to the Bitnami keycloak container
keycloak 10:33:23.16 Subscribe to project updates by watching https://github.com/bitnami/containers
keycloak 10:33:23.18 Submit issues and feature requests at https://github.com/bitnami/containers/issues
keycloak 10:33:23.22
keycloak 10:33:23.23 INFO  ==> ** Starting keycloak setup **
keycloak 10:33:23.28 INFO  ==> Validating settings in KEYCLOAK_* env vars...
/opt/bitnami/scripts/libvalidations.sh: line 245: return: : numeric argument required
keycloak 10:33:23.56 INFO  ==> Trying to connect to PostgreSQL server demo-keycloak-postgresql...
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
keycloak 10:35:54.06 ERROR ==> Unable to connect to host demo-keycloak-postgresql
root@devops61:~#
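
A quick way to rule out basic reachability of the demo-keycloak-postgresql service from inside the cluster (a debugging sketch, not from the thread; it assumes pg_isready is available in the bitnami/postgresql image):

# Spawn a throwaway pod and probe the PostgreSQL service directly.
kubectl run pg-check --rm -it --restart=Never \
  --image=docker.io/bitnami/postgresql -- \
  pg_isready -h demo-keycloak-postgresql -p 5432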

And the pod description:

root@devops61:~# kubectl describe pod demo-keycloak-0
Name:             demo-keycloak-0
Namespace:        default
Priority:         0
Service Account:  demo-keycloak
Node:             devops66.ef.com/10.192.168.66
Start Time:       Thu, 24 Aug 2023 13:27:49 +0500
Labels:           app.kubernetes.io/component=keycloak
                  app.kubernetes.io/instance=demo-keycloak
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=keycloak
                  controller-revision-hash=demo-keycloak-5c49768699
                  helm.sh/chart=keycloak-16.0.5
                  statefulset.kubernetes.io/pod-name=demo-keycloak-0
Annotations:      checksum/configmap-env-vars: efb719802038afe1aea92ff5a77b793497d92933bc372d2759a61b50b176f529
                  checksum/secrets: bd1b6b1c75a880a3c13b31bd05fea312f9cd78b5a054227885b8eb3eecc796d7
                  cni.projectcalico.org/containerID: 7a2b49364c11c5df9bd5cbf7e33993a83e080fe8386c639ad2d916def7107438
                  cni.projectcalico.org/podIP: 10.42.1.54/32
                  cni.projectcalico.org/podIPs: 10.42.1.54/32
                  k8s.v1.cni.cncf.io/network-status:
                    [{
                        "name": "multus-cni-network",
                        "ips": [
                            "10.42.1.54"
                        ],
                        "default": true,
                        "dns": {}
                    }]
Status:           Running
IP:               10.42.1.54
IPs:
  IP:           10.42.1.54
Controlled By:  StatefulSet/demo-keycloak
Containers:
  keycloak:
    Container ID:   containerd://a45e788672b9f6abb1eebff9e2705cba06233e9c18f0a0871cb433da4b5a182e
    Image:          docker.io/bitnami/keycloak:22.0.1-debian-11-r20
    Image ID:       docker.io/bitnami/keycloak@sha256:736a672fcd01f9e708902dead31395eedf89160969e0d4ef45f292a667c42a3d
    Ports:          8080/TCP, 7800/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Tue, 29 Aug 2023 15:36:20 +0500
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 29 Aug 2023 15:33:22 +0500
      Finished:     Tue, 29 Aug 2023 15:35:54 +0500
    Ready:          False
    Restart Count:  4
    Liveness:       http-get http://:http/iauth delay=300s timeout=5s period=1s #success=1 #failure=3
    Readiness:      http-get http://:http/iauthrealms/master delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      demo-keycloak-env-vars  ConfigMap  Optional: false
    Environment:
      KUBERNETES_NAMESPACE:               default (v1:metadata.namespace)
      BITNAMI_DEBUG:                      false
      KEYCLOAK_ADMIN_PASSWORD:            <set to the key 'admin-password' in secret 'demo-keycloak'>       Optional: false
      KEYCLOAK_DATABASE_PASSWORD:         <set to the key 'password' in secret 'demo-keycloak-postgresql'>  Optional: false
      KEYCLOAK_HTTP_RELATIVE_PATH:        /iauth
      KEYCLOAK_ADMIN_URL:                 https://devops67.ef.com/iauth/
      KEYCLOAK_EXTRA_ARGS:                -Dkeycloak.frontendUrl=https://devops67.ef.com/auth/  -Dkeycloak.profile.feature.upload_scripts=enabled
      KEYCLOAK_FRONTEND_URL:              https://devops67.ef.com/iauth/
      KEYCLOAK_LOGLEVEL:                  DEBUG
      KEYCLOAK_PASSWORD:                  admin
      KEYCLOAK_PROXY_ADDRESS_FORWARDING:  true
      KEYCLOAK_USER:                      admin
      NODE_ENV:                           development
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6bcxm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-6bcxm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                      From     Message
  ----     ------          ----                     ----     -------
  Warning  Unhealthy       4d16h (x4401 over 5d2h)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 404
  Normal   SandboxChanged  8m17s                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   AddedInterface  7m33s                    multus   Add eth0 [10.42.1.54/32] from k8s-pod-network
  Normal   AddedInterface  7m33s                    multus   Add eth0 [10.42.1.54/32] from multus-cni-network
  Warning  BackOff         4m44s (x2 over 4m45s)    kubelet  Back-off restarting failed container
  Normal   Pulled          4m33s (x2 over 7m27s)    kubelet  Container image "docker.io/bitnami/keycloak:22.0.1-debian-11-r20" already present on machine
  Normal   Created         4m33s (x2 over 7m26s)    kubelet  Created container keycloak
  Normal   Started         4m30s (x2 over 7m23s)    kubelet  Started container keycloak
  Warning  Unhealthy       3m14s (x22 over 6m47s)   kubelet  Readiness probe failed: Get "http://10.42.1.54:8080/iauthrealms/master": dial tcp 10.42.1.54:8080: connect: connection refused
busyboy77 commented 1 year ago

Did someone check the name of the keycloak-config-cli pod? Its template is not rendering properly.

root@devops61:~# k get pods
NAME                                              READY   STATUS    RESTARTS        AGE
common.names.fullname-keycloak-config-cli-dzv9h   0/1     Error     0               5d2h
common.names.fullname-keycloak-config-cli-rfr4g   0/1     Error     0               5d2h
demo-keycloak-0                                   0/1     Running   4 (59s ago)     5d2h
demo-keycloak-postgresql-0                        1/1     Running   1 (7m29s ago)   5d2h
juan131 commented 1 year ago

Hi!!

It seems the issue is that Keycloak cannot connect to PostgreSQL, see:

keycloak 10:33:23.56 INFO  ==> Trying to connect to PostgreSQL server demo-keycloak-postgresql...
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
keycloak 10:35:54.06 ERROR ==> Unable to connect to host demo-keycloak-postgresql

Did you remove existing PVC(s) in your cluster from previous deployments? You might have some PostgreSQL data that's not compatible with the credentials created for PostgreSQL in the new installation. Find more information about this kind of issue in the link below:

busyboy77 commented 1 year ago

I have been cleaning up and removing everything (the Helm deployment and the PVC); there is nothing left over in this deployment as of now. It's just a demo we are preparing for the upgrade to the latest revision.

The default installation with default values doesn't work either. I'm working along the same lines, but with no success.
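
For reference, the cleanup described above amounts to something like the following (a sketch; the PVC name matches the k get pvc output later in the thread):

# Remove the release and the PostgreSQL data PVC it leaves behind.
helm uninstall demo-keycloak
kubectl delete pvc data-demo-keycloak-postgresql-0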

busyboy77 commented 1 year ago

Just started another deployment, without keycloak-config-cli this time.

I got the logs below (note the pod logs saying /opt/bitnami/scripts/libvalidations.sh: line 245: return: : numeric argument required):

root@devops61:~/keycloak/keycloak# k logs -f demo-keycloak-0
keycloak 12:51:47.67
keycloak 12:51:47.68 Welcome to the Bitnami keycloak container
keycloak 12:51:47.71 Subscribe to project updates by watching https://github.com/bitnami/containers
keycloak 12:51:47.72 Submit issues and feature requests at https://github.com/bitnami/containers/issues
keycloak 12:51:47.73
keycloak 12:51:47.74 INFO  ==> ** Starting keycloak setup **
keycloak 12:51:47.78 INFO  ==> Validating settings in KEYCLOAK_* env vars...
/opt/bitnami/scripts/libvalidations.sh: line 245: return: : numeric argument required
keycloak 12:51:47.92 INFO  ==> Trying to connect to PostgreSQL server demo-keycloak-postgresql...
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
keycloak 12:52:18.62 INFO  ==> Found PostgreSQL server listening at demo-keycloak-postgresql:5432
keycloak 12:52:18.68 INFO  ==> Configuring database settings
keycloak 12:52:18.90 INFO  ==> Enabling statistics
keycloak 12:52:18.95 INFO  ==> Enabling health endpoints
keycloak 12:52:18.98 INFO  ==> Configuring http settings
keycloak 12:52:19.07 INFO  ==> Configuring hostname settings
keycloak 12:52:19.08 INFO  ==> Configuring cache count
keycloak 12:52:19.10 INFO  ==> Configuring log level
keycloak 12:52:19.12 INFO  ==> Configuring proxy

keycloak 12:52:19.15 INFO  ==> ** keycloak setup finished! **
keycloak 12:52:19.25 INFO  ==> ** Starting keycloak **
Appending additional Java properties to JAVA_OPTS: -Djgroups.dns.query=demo-keycloak-headless.default.svc.cluster.local
Updating the configuration and installing your custom providers, if any. Please wait.
2023-08-29 12:52:36,314 WARN  [org.keycloak.services] (build-29) KC-SERVICES0047: metrics (org.jboss.aerogear.keycloak.metrics.MetricsEndpointFactory) is implementing the internal SPI realm-restapi-extension. This SPI is internal and may change without notice
2023-08-29 12:52:38,735 WARN  [org.keycloak.services] (build-29) KC-SERVICES0047: metrics-listener (org.jboss.aerogear.keycloak.metrics.MetricsEventListenerFactory) is implementing the internal SPI eventsListener. This SPI is internal and may change without notice
2023-08-29 12:52:59,852 INFO  [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 31229ms
2023-08-29 12:53:09,399 INFO  [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: <unset>, Hostname: <request>, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin URL: <unset>, Admin: <request>, Port: -1, Proxied: true
2023-08-29 12:53:22,117 WARN  [io.quarkus.agroal.runtime.DataSources] (main) Datasource <default> enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly
2023-08-29 12:53:27,407 WARN  [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2023-08-29 12:53:28,139 INFO  [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2023-08-29 12:53:29,481 INFO  [org.keycloak.broker.provider.AbstractIdentityProviderMapper] (main) Registering class org.keycloak.broker.provider.mappersync.ConfigSyncEventListener
2023-08-29 12:53:29,741 INFO  [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN` with stack `kubernetes`
2023-08-29 12:53:29,746 INFO  [org.jgroups.JChannel] (keycloak-cache-init) local_addr: 211a3c0b-7592-4bf2-8a1e-85ba1cd946f3, name: demo-keycloak-0-21525
2023-08-29 12:53:29,790 INFO  [org.jgroups.protocols.FD_SOCK2] (keycloak-cache-init) server listening on *.57800
2023-08-29 12:53:31,817 INFO  [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) demo-keycloak-0-21525: no members discovered after 2014 ms: creating cluster as coordinator
2023-08-29 12:53:31,855 INFO  [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [demo-keycloak-0-21525|0] (1) [demo-keycloak-0-21525]
2023-08-29 12:53:32,090 INFO  [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `demo-keycloak-0-21525`, physical addresses are `[10.42.3.49:7800]`
2023-08-29 12:53:34,256 INFO  [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: demo-keycloak-0-21525, Site name: null
2023-08-29 12:53:40,620 INFO  [org.keycloak.quarkus.runtime.storage.legacy.liquibase.QuarkusJpaUpdaterProvider] (main) Initializing database schema. Using changelog META-INF/jpa-changelog-master.xml
2023-08-29 12:54:48,076 INFO  [org.keycloak.services] (main) KC-SERVICES0050: Initializing master realm
2023-08-29 12:55:01,546 INFO  [io.quarkus] (main) Keycloak 22.0.1 on JVM (powered by Quarkus 3.2.0.Final) started in 121.080s. Listening on: http://0.0.0.0:8080
2023-08-29 12:55:01,548 INFO  [io.quarkus] (main) Profile dev activated.
2023-08-29 12:55:01,548 INFO  [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, logging-gelf, micrometer, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, vertx]
2023-08-29 12:55:02,427 INFO  [org.keycloak.services] (main) KC-SERVICES0009: Added user 'admin' to realm 'master'
2023-08-29 12:55:02,528 WARN  [org.keycloak.quarkus.runtime.KeycloakMain] (main) Running the server in development mode. DO NOT use this configuration in production.
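
For context on the libvalidations.sh message: bash's return builtin only accepts a numeric status, so the error indicates it was called with an empty value somewhere in the validation path. A hypothetical reduction (not the actual Bitnami code):

#!/usr/bin/env bash
# Reproduces "return: : numeric argument required": `return` is handed
# an empty string instead of a numeric exit status.
validate_setting() {
    local err=""       # never assigned a number on this code path
    return "$err"      # bash: return: : numeric argument required
}
validate_setting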

The values file is given below:

global:
  imageRegistry: ""
  imagePullSecrets: []
  storageClass: ""
kubeVersion: ""
nameOverride: ""
fullnameOverride: ""
namespaceOverride: ""
commonLabels: {}
enableServiceLinks: true
commonAnnotations: {}
dnsPolicy: ""
dnsConfig: {}
clusterDomain: cluster.local
extraDeploy: []
diagnosticMode:
  enabled: false
  command:
    - sleep
  args:
    - infinity
image:
  registry: docker.io
  repository: bitnami/keycloak
  tag: 22.0.1-debian-11-r20
  digest: ""
  pullPolicy: IfNotPresent
  pullSecrets: []
  debug: false
auth:
  adminUser: admin
  adminPassword: "admin"
  existingSecret: ""
  passwordSecretKey: ""
tls:
  enabled: false
  autoGenerated: false
  existingSecret: ""
  usePem: false
  truststoreFilename: "keycloak.truststore.jks"
  keystoreFilename: "keycloak.keystore.jks"
  keystorePassword: ""
  truststorePassword: ""
  passwordsSecret: ""
spi:
  existingSecret: ""
  truststorePassword: ""
  truststoreFilename: "keycloak-spi.truststore.jks"
  passwordsSecret: ""
  hostnameVerificationPolicy: ""
production: false
proxy: passthrough
httpRelativePath: "/iauth"
configuration: ""
existingConfigmap: ""
extraStartupArgs: ""
initdbScripts: {}
initdbScriptsConfigMap: ""
command: []
args: []
extraEnvVars:
   - name:  KEYCLOAK_ADMIN_URL
     value: https://devops67.ef.com/iauth/
   - name:  KEYCLOAK_EXTRA_ARGS
     value: -Dkeycloak.frontendUrl=https://devops67.ef.com/iauth/  -Dkeycloak.profile.feature.upload_scripts=enabled
   - name:  KEYCLOAK_FRONTEND_URL
     value: https://devops67.ef.com/iauth/
   - name:  KEYCLOAK_LOGLEVEL
     value: DEBUG
   - name:  KEYCLOAK_PASSWORD
     value: admin
   - name:  KEYCLOAK_PROXY_ADDRESS_FORWARDING
     value: "true"
   - name:  KEYCLOAK_USER
     value: admin
   - name:  NODE_ENV
     value: development
extraEnvVarsCM: ""
extraEnvVarsSecret: ""
replicaCount: 1
containerPorts:
  http: 8080
  https: 8443
  infinispan: 7800
extraContainerPorts: []
podSecurityContext:
  enabled: true
  fsGroup: 1001
containerSecurityContext:
  enabled: true
  runAsUser: 1001
  runAsNonRoot: true
resources:
  limits: {}
  requests: {}
livenessProbe:
  enabled: true
  initialDelaySeconds: 300
  periodSeconds: 1
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 1
  failureThreshold: 3
  successThreshold: 1
startupProbe:
  enabled: false
  initialDelaySeconds: 30
  periodSeconds: 5
  timeoutSeconds: 1
  failureThreshold: 60
  successThreshold: 1
customLivenessProbe: {}
customReadinessProbe: {}
customStartupProbe: {}
lifecycleHooks: {}
hostAliases: []
podLabels: {}
podAnnotations: {}
podAffinityPreset: ""
podAntiAffinityPreset: soft
nodeAffinityPreset:
  type: ""
  key: ""
  values: []
affinity: {}
nodeSelector: {}
tolerations: []
topologySpreadConstraints: []
podManagementPolicy: Parallel
priorityClassName: ""
schedulerName: ""
terminationGracePeriodSeconds: ""
updateStrategy:
  type: RollingUpdate
  rollingUpdate: {}
extraVolumes: []
extraVolumeMounts: []
initContainers: []
sidecars: []
service:
  type: ClusterIP
  http:
    enabled: true
  ports:
    http: 80
    https: 443
  nodePorts:
    http: ""
    https: ""
  sessionAffinity: None
  sessionAffinityConfig: {}
  clusterIP: ""
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  externalTrafficPolicy: Cluster
  annotations: {}
  extraPorts: []
  extraHeadlessPorts: []
  headless:
    annotations: {}
    extraPorts: []
ingress:
  enabled: true
  ingressClassName: ""
  pathType: Prefix
  apiVersion: ""
  hostname: devops67.ef.com
  path: "{{ .Values.httpRelativePath }}"
  servicePort: http
  annotations:
     kubernetes.io/ingress.class: nginx
     nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
     nginx.org/server-snippets: |
       location / {
         proxy_set_header X-Forwarded-For $host;
         proxy_set_header X-Forwarded-Proto $scheme;
       }
  labels: {}
  tls: true
  selfSigned: true
  extraHosts: []
  extraPaths: []
  extraTls: []
  secrets: []
  extraRules: []
networkPolicy:
  enabled: false
  allowExternal: true
  additionalRules: {}
serviceAccount:
  create: true
  name: ""
  automountServiceAccountToken: true
  annotations: {}
  extraLabels: {}
rbac:
  create: false
  rules: []
pdb:
  create: false
  minAvailable: 1
  maxUnavailable: ""
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 11
  targetCPU: ""
  targetMemory: ""
metrics:
  enabled: false
  service:
    ports:
      http: 8080
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "{{ .Values.metrics.service.ports.http }}"
  serviceMonitor:
    enabled: false
    port: http
    endpoints:
      - path: '{{ include "keycloak.httpPath" . }}metrics'
      - path: '{{ include "keycloak.httpPath" . }}realms/master/metrics'
    path: ""
    namespace: ""
    interval: 30s
    scrapeTimeout: ""
    labels: {}
    selector: {}
    relabelings: []
    metricRelabelings: []
    honorLabels: false
    jobLabel: ""
  prometheusRule:
    enabled: false
    namespace: ""
    labels: {}
    groups: []
keycloakConfigCli:
  enabled: false
  image:
    registry: docker.io
    repository: bitnami/keycloak-config-cli
    tag: 5.8.0-debian-11-r22
    digest: ""
    pullPolicy: IfNotPresent
    pullSecrets: []
  annotations:
    helm.sh/hook: "post-install,post-upgrade,post-rollback"
    helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
    helm.sh/hook-weight: "5"
  command: []
  args: []
  hostAliases: []
  resources:
    limits: {}
    requests: {}
  containerSecurityContext:
    enabled: true
    runAsUser: 1001
    runAsNonRoot: true
  podSecurityContext:
    enabled: true
    fsGroup: 1001
  backoffLimit: 1
  podLabels: {}
  podAnnotations: {}
  nodeSelector: {}
  podTolerations: []
  extraEnvVars: []
  extraEnvVarsCM: ""
  extraEnvVarsSecret: ""
  extraVolumes: []
  extraVolumeMounts: []
  initContainers: []
  sidecars: []
  configuration: {}
  existingConfigmap: ""
  cleanupAfterFinished:
    enabled: false
    seconds: 600
postgresql:
  enabled: true
  auth:
    postgresPassword: ""
    username: bn_keycloak
    password: ""
    database: bitnami_keycloak
    existingSecret: ""
  architecture: standalone
externalDatabase:
  host: ""
  port: 5432
  user: bn_keycloak
  database: bitnami_keycloak
  password: ""
  existingSecret: ""
  existingSecretHostKey: ""
  existingSecretPortKey: ""
  existingSecretUserKey: ""
  existingSecretDatabaseKey: ""
  existingSecretPasswordKey: ""
cache:
  enabled: true
  stackName: kubernetes
  stackFile: ""
logging:
  output: default
  level: INFO

And the current pods are:

root@devops61:~# k get pods
NAME                         READY   STATUS    RESTARTS   AGE
demo-keycloak-0              0/1     Running   0          8m10s
demo-keycloak-postgresql-0   1/1     Running   0          8m9s
root@devops61:~#
busyboy77 commented 1 year ago

The PVC is bound and was just re-created from scratch, so there are no residuals from any previous deployment.

root@devops61:~# k get pvc
NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
data-demo-keycloak-postgresql-0   Bound    pvc-0aeb1e26-13ef-41f3-b978-a06d61abf0ae   8Gi        RWO            rook-ceph-block   9m37s
root@devops61:~#
busyboy77 commented 1 year ago

I had a slight mistake in httpRelativePath (it was missing the trailing / after /iauth; it should be /iauth/, my bad).

Now I'm getting this error (the Helm chart doesn't automatically generate a password for the bn_keycloak user):

2023-08-29 13:08:37,347 WARN  [org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator] (JPA Startup Thread) HHH000342: Could not obtain connection to query metadata: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "bn_keycloak"
        at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:693)
        at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:203)
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:258)
        at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:54)
        at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:263)
        at org.postgresql.Driver.makeConnection(Driver.java:443)
        at org.postgresql.Driver.connect(Driver.java:297)
        at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:681)
        at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:229)
        at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:103)
        at org.postgresql.xa.PGXADataSource.getXAConnection(PGXADataSource.java:49)
        at org.postgresql.xa.PGXADataSource.getXAConnection(PGXADataSource.java:35)
        at io.agroal.pool.ConnectionFactory.createConnection(ConnectionFactory.java:232)
        at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:536)
        at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:517)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at io.agroal.pool.util.PriorityScheduledExecutor.beforeExecute(PriorityScheduledExecutor.java:75)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:833)

Going to re-deploy with a random password for the bn_keycloak user under the postgresql subchart.
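
That change amounts to pinning the subchart credentials explicitly instead of relying on generation; a minimal values sketch with a placeholder value (not from the thread):

postgresql:
  auth:
    # Placeholder password; pin it explicitly so Keycloak and PostgreSQL
    # agree on the bn_keycloak credentials across reinstalls.
    password: "change-me-bn-keycloak"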

busyboy77 commented 1 year ago

I have managed to run Keycloak with the updates below:

1. Disabled keycloak-config-cli completely.
2. Added the bn_keycloak user's password to the Helm chart values.

The stack is working fine now, except for the ingress: the admin UI keeps loading forever.

Can you please comment on the ingress-related configuration if there is anything that can help? I'm using the RKE2 nginx-based ingress controller.
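
Not confirmed in this thread, but a frequent cause of an admin console that never finishes loading behind ingress-nginx is the default proxy buffer being too small for Keycloak's large headers; a hedged sketch of an annotation worth trying:

ingress:
  annotations:
    # Assumption: the admin console's responses exceed ingress-nginx's
    # default 4k proxy buffer; enlarge it.
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"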

juan131 commented 1 year ago

Hi @busyboy77

see the pods logs saying something like /opt/bitnami/scripts/libvalidations.sh: line 245: return: : numeric argument required

Thanks for reporting it; this is something to be fixed in the keycloak container, see:

I'm glad you were able to solve the installation issues. Could you please open a new issue describing the problems with the ingress configuration and close this one? We should stick to one topic per issue to avoid confusing other users.

github-actions[bot] commented 1 year ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 12 months ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.