Closed bzolivereckle closed 4 months ago
@bzolivereckle Hi, I hit the same issue, but I found that there are some breaking changes between KC24 and KC25:
Management port for metrics and health endpoints
The /health and /metrics endpoints are accessible on the management port 9000, which is turned on by default. This means these endpoints are no longer exposed on the standard Keycloak ports 8080 and 8443.
To restore the old behavior, use the property --legacy-observability-interface=true, which will not expose these endpoints on the management port. However, this property is deprecated and will be removed in a future release, so it is recommended not to use it.
The management interface uses a different HTTP server than the default Keycloak HTTP server, and the two can be configured separately. Beware: if no values are supplied for the management interface properties, they are inherited from the default Keycloak HTTP server. (Upgrading Guide)
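For illustration, these management-interface settings can also be applied through the chart's extraEnv (a sketch; the option names KC_HEALTH_ENABLED, KC_METRICS_ENABLED and KC_HTTP_MANAGEMENT_PORT are taken from the Keycloak 25 docs, so verify them against your version):

```yaml
# Sketch: enable health/metrics and pin the management port explicitly.
# Env var names assumed from the Keycloak 25 docs - verify for your version.
extraEnv: |
  - name: KC_HEALTH_ENABLED
    value: "true"
  - name: KC_METRICS_ENABLED
    value: "true"
  - name: KC_HTTP_MANAGEMENT_PORT   # 9000 is already the default
    value: "9000"
```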
So I tried turning off the three probes in the values, and the pod ran normally, but this is not the correct solution; I think the health-check part of the helm chart needs to be updated.
I ran into the same issue. Instead of turning off the probes it's also possible to define the new management port as an extraPort in your values.yaml, like this:
extraPorts:
- name: management
containerPort: 9000
protocol: TCP
Afterwards, you can adjust the probes to use this new management port.
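For example, a readiness probe pointing at the new management port could look like this (a sketch for this chart's string-templated probe values; the plain /health/ready path is an assumption, adjust it if you run Keycloak under a relative path):

```yaml
readinessProbe: |
  httpGet:
    path: /health/ready   # assumption: no http.relativePath prefix in use
    port: management      # matches the extraPorts entry above
  initialDelaySeconds: 10
  timeoutSeconds: 1
```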
good idea! I will have a try
@bonnQvantum Hi, I found two extraPorts properties in the values.yaml — which one is the right one?
# Add additional volumes mounts, e. g. for custom themes
extraVolumeMounts: |
- name: newrelic-volume
mountPath: /opt/newrelic-agent/
- name: keycloakx-quarkus-properties
mountPath: /opt/keycloak/conf/quarkus.properties
subPath: quarkus.properties
# Add additional ports, e. g. for admin console or exposing JGroups ports
extraPorts: []
service:
# Annotations for HTTP service
annotations: {}
# Additional labels for headless and HTTP Services
labels: {}
# key: value
# The Service type
type: ClusterIP
# Optional IP for the load balancer. Used for services of type LoadBalancer only
loadBalancerIP: ""
# The http Service port
httpPort: 80
# The HTTP Service node port if type is NodePort
httpNodePort: null
# The HTTPS Service port
httpsPort: 8443
# The HTTPS Service node port if type is NodePort
httpsNodePort: null
# Additional Service ports, e. g. for custom admin console
extraPorts: []
It should be the first one, at the top level. The second one would open the port on the Service, but for the probes it's not necessary to expose it there.
@bonnQvantum Ok, thanks for the information. And for the probe, should I write it like this:
# Startup probe configuration
startupProbe:
enabled: true
properties: |
httpGet:
path: '{{ tpl .Values.http.relativePath $ | trimSuffix "/" }}/health'
port: management
initialDelaySeconds: 210
timeoutSeconds: 1
failureThreshold: 60
periodSeconds: 5
I'm not sure whether the path will work with the prefix, that could depend on the configuration. It's probably best to try it in a dev environment. If it doesn't work you can use port-forward and try a few different urls to find the correct one.
thx for the info about the probes and the port, but on my side the problem is something different.
I deactivated the probes but the pod still fails without any error message; I just get the k8s message: Back-off restarting failed container keycloak in pod
btw I have these modifications to command and extraEnv in place:
## Overrides the default entrypoint of the Keycloak container
command:
- "/opt/keycloak/bin/kc.sh"
- "start"
- "--http-enabled=true"
- "--http-port=8080"
- "--hostname-strict=false"
- "--hostname-strict-https=false"
# Additional environment variables for Keycloak
extraEnv: |
- name: JAVA_OPTS_APPEND
value: >-
-Djgroups.dns.query={{ include "keycloak.fullname" . }}-headless
- name: KC_LOG_LEVEL
value: DEBUG
- name: KC_DB_URL_PROPERTIES
value: "?sslmode=verify-ca&sslfactory=org.postgresql.ssl.DefaultJavaSSLFactory"
I've chosen a temporary solution: adding one environment variable:
extraEnv: |
- name: KC_LEGACY_OBSERVABILITY_INTERFACE
value: "true"
Referenced by: Configuring the Management Interface
nope, doesn't work either .. even with the legacy stuff it crashes without any message :-(
I've just started using this chart from scratch and want to point out that e.g. the option --hostname-strict-https from your command was removed in version 25.
See the upgrade docs for further details: https://www.keycloak.org/docs/25.0.0/upgrading/#new-hostname-options
Maybe it helps, although I got an error message in the pod's logs telling me exactly this. So it's possible that the root cause is different in your case.
thx for pointing that out; I saw this already and tried different options, but none of them worked for me :-(
I managed to update to 25.0.0 (from 22.0.1) with this chart version, thanks to @bonnQvantum's suggestion. The configuration that worked for me was:
image:
tag: "25.0.0"
extraPorts:
- name: management
containerPort: 9000
protocol: TCP
livenessProbe: |
httpGet:
path: '{{ tpl .Values.http.relativePath $ | trimSuffix "/" }}/health/live'
port: management
initialDelaySeconds: 0
timeoutSeconds: 5
readinessProbe: |
httpGet:
path: '{{ tpl .Values.http.relativePath $ | trimSuffix "/" }}/health/ready'
port: management
initialDelaySeconds: 10
timeoutSeconds: 1
# Startup probe configuration
startupProbe: |
httpGet:
path: '{{ tpl .Values.http.relativePath $ | trimSuffix "/" }}/health'
port: management
initialDelaySeconds: 15
timeoutSeconds: 1
failureThreshold: 60
periodSeconds: 5
I finally got it working. Besides the changes @asoldo11 mentioned, I needed to change from
## Overrides the default entrypoint of the Keycloak container
command:
- "/opt/keycloak/bin/kc.sh"
- "start"
- "--http-enabled=true"
- "--http-port=8080"
- "--hostname-strict=false"
- "--hostname-strict-https=false"
to
## Overrides the default entrypoint of the Keycloak container
command:
- "/opt/keycloak/bin/kc.sh"
- "start"
- "--http-enabled=true"
- "--http-port=8080"
- "--hostname-strict=false"
- "--hostname-backchannel-dynamic=false"
We also ran into some trouble with the nginx ingress because of "upstream sent too big header". This helped us: https://andrewlock.net/fixing-nginx-upstream-sent-too-big-header-error-when-running-an-ingress-controller-in-kubernetes/
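For anyone hitting the same nginx error: the fix from that article amounts to enlarging the proxy buffers, e.g. via ingress annotations (a sketch; the sizes are illustrative, tune them for your setup):

```yaml
# Sketch: ingress-nginx annotations to cope with Keycloak's large response
# headers; buffer sizes are illustrative values, not a recommendation.
annotations:
  nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
  nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
```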
We have been using the keycloakx helm chart for quite a while and have had no issues so far. Today I tried to update to 25.0.0 by just replacing the value for image.tag.
The helm chart was processed fine, but the keycloak pod was not starting: without any error message, the pod was restarting all the time.
Rolling back to 24.0.3 brings the keycloak pod back, but version 25 does not work.
Steps to reproduce