codecentric / helm-charts

A curated set of Helm charts brought to you by codecentric
Apache License 2.0

[keycloak] - admin web gui not accessible - /auth/admin or /admin - gives 404 #542

Closed · owit-infra closed 2 years ago

owit-infra commented 2 years ago

I tried various ways of deploying the keycloak chart, version 17.0.1, and once it is deployed I cannot access the admin page on the paths /admin or /auth/admin - it gives a 404 error.

I tried enabling the console ingress with console.enabled: true. That just creates an ingress which targets the HTTP port of the service (just like the original ingress); it doesn't change the behaviour and still gives a 404. I also tried changing the additionally created console ingress target port to 9900 - that gives a 502 Bad Gateway error.

I tried deploying the default helm chart, with no options changed other than adding an ingress. Same outcome.

I tried deploying the default helm chart with ALL default values, then created an Ingress resource for it manually. Same outcome: error 404.

In ALL of these scenarios, Keycloak responds on the /auth path just fine. It's only when trying to log in to the console that it fails.

I looked through the documentation and can't see any specific settings I need to change to make this work.

Am I missing something obvious here?
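For anyone triaging a similar report: the two status codes described above already localize the failure. A tiny shell sketch of that reasoning (purely illustrative, not part of the chart):

```shell
# Illustrative helper: map the status code seen at the ingress for a path
# to where the misrouting usually lives.
diagnose() {
  case "$1" in
    404) echo "backend reached, but nothing serves this path (check ingress paths and servicePort)" ;;
    502) echo "ingress matched, but the upstream port is wrong or not listening (check target port)" ;;
    200|30?) echo "routing looks fine" ;;
    *) echo "unexpected status: $1" ;;
  esac
}

diagnose 404   # the /auth/admin symptom
diagnose 502   # the port-9900 symptom
```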

grafjo commented 2 years ago

Maybe you can provide your values.yaml, so we can see what you did or didn't do.

volaralle commented 2 years ago

It works for me:

ingress:
  enabled: true
  servicePort: http
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  labels:
    app.kubernetes.io/instance: keycloak
  rules:
  - host: name.demo.com
    paths:
      - path: /
        pathType: Prefix
  tls:
  - hosts:
    - name.demo.com
    secretName: keycloak-tls-prod
  console:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    rules:
    - host: name.demo.com
      paths:
        - path: /auth/admin/
          pathType: Prefix
    tls:
    - hosts:
      - name.demo.com
      secretName: keycloak-tls-prod

with

extraEnv: |
  - name: KEYCLOAK_USER
    value: admin
  - name: KEYCLOAK_PASSWORD
    value: asdfasdf
  - name: PROXY_ADDRESS_FORWARDING
    value: "true" 
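The PROXY_ADDRESS_FORWARDING=true entry is what makes the console login redirects come back on the public HTTPS URL instead of the pod's internal address. A standalone sketch of the idea (illustration only, not Keycloak's actual code): with forwarding honored, the client-facing URL is rebuilt from the proxy's X-Forwarded-* values instead of the backend's own host.

```shell
# Illustration only: how the redirect base URL changes once the proxy's
# X-Forwarded-* values are honored instead of the backend's own address.
effective_base_url() {   # usage: effective_base_url HOST [XF_PROTO] [XF_HOST]
  scheme="${2:-http}"
  host="${3:-$1}"
  echo "${scheme}://${host}"
}

effective_base_url "keycloak:80"                          # -> http://keycloak:80
effective_base_url "keycloak:80" "https" "name.demo.com"  # -> https://name.demo.com
```

Without the forwarded values, a browser that logged in over https://name.demo.com gets redirected to the internal plain-HTTP address, which is one common way the console breaks behind an ingress.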

jrivers96 commented 2 years ago

I'm having the same problem with 18.1

We are using a separate ingress that was proven with a prior version of Keycloak. I can curl a token from the Keycloak service, so I know the server is healthy. I have a gut feeling that the self-signed certificate is created incorrectly.

I'm a bit suspicious of the error below as well.

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.wildfly.extension.elytron.SSLDefinitions (jar:file:/opt/jboss/keycloak/modules/system/layers/base/org/wildfly/extension/elytron/main/wildfly-elytron-integration-18.0.4.Final.jar!/) to method com.sun.net.ssl.internal.ssl.Provider.isFIPS()
WARNING: Please consider reporting this to the maintainers of org.wildfly.extension.elytron.SSLDefinitions
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Our values.yaml:

# Optionally override the fully qualified name
fullnameOverride: ""

# Optionally override the name
nameOverride: ""

# The number of replicas to create (has no effect if autoscaling enabled)
replicas: 3

image:
  # The Keycloak image repository
  repository: quay.io/keycloak/keycloak
  # Overrides the Keycloak image tag whose default is the chart appVersion
  tag: ""
  # The Keycloak image pull policy
  pullPolicy: IfNotPresent

# Image pull secrets for the Pod
imagePullSecrets: []
# - name: myRegistryKeySecretName

# Mapping between IPs and hostnames that will be injected as entries in the Pod's hosts files
hostAliases: []
# - ip: "1.2.3.4"
#   hostnames:
#     - "my.host.com"

# Indicates whether information about services should be injected into Pod's environment variables, matching the syntax of Docker links
enableServiceLinks: true

# Pod management policy. One of `Parallel` or `OrderedReady`
podManagementPolicy: Parallel

# Pod restart policy. One of `Always`, `OnFailure`, or `Never`
restartPolicy: Always

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
  # Additional annotations for the ServiceAccount
  annotations: {}
  # Additional labels for the ServiceAccount
  labels: {"component":"keycloak", "tier":"back-end", "customer-facing":"yes", "app-role":"auth"}
  # Image pull secrets that are attached to the ServiceAccount
  imagePullSecrets: []

rbac:
  create: false
  rules: []
  # RBAC rules for KUBE_PING
  #  - apiGroups:
  #      - ""
  #    resources:
  #      - pods
  #    verbs:
  #      - get
  #      - list

# SecurityContext for the entire Pod. Every container running in the Pod will inherit this SecurityContext. This might be relevant when other components of the environment inject additional containers into running Pods (service meshes are the most prominent example for this)
podSecurityContext:
  fsGroup: 1000

# SecurityContext for the Keycloak container
securityContext:
  runAsUser: 1000
  runAsNonRoot: true

# Additional init containers, e. g. for providing custom themes
extraInitContainers: |
  - name: theme-provider
    image: REDACT
    imagePullPolicy: Always
    command:
      - sh
    args:
      - -c
      - |
        echo "Copying theme..."
        cp -R /ndaqtheme/* /theme
        echo "Copying authenticator..."
        cp -R /authdeploy/*.jar  /opt/jboss/keycloak/standalone/deployments
    volumeMounts:
      - name: theme
        mountPath: /theme
      - name: deploy
        mountPath: /opt/jboss/keycloak/standalone/deployments
  - name: metric-spi
    image: REDACT
    imagePullPolicy: Always
    command:
      - sh
    args:
      - -c
      - |
        echo "Copying metrics spi..."
        cp -R /keycloak-metrics/keycloak-metrics-spi-*.jar  /opt/jboss/keycloak/standalone/deployments
    volumeMounts:
      - name: deploy
        mountPath: /opt/jboss/keycloak/standalone/deployments

# When using service meshes which rely on a sidecar, it may be necessary to skip init containers altogether,
# since the sidecar doesn't start until the init containers are done, and the sidecar may be required
# for network access.
# For example, Istio in strict mTLS mode prevents the pgchecker init container from ever completing
skipInitContainers: false

# Additional sidecar containers, e. g. for a database proxy, such as Google's cloudsql-proxy
extraContainers: ""

# Lifecycle hooks for the Keycloak container
lifecycleHooks: |
#  postStart:
#    exec:
#      command:
#        - /bin/sh
#        - -c
#        - ls

# Termination grace period in seconds for Keycloak shutdown. Clusters with a large cache might need to extend this to give Infinispan more time to rebalance
terminationGracePeriodSeconds: 120

# The internal Kubernetes cluster domain
clusterDomain: cluster.local

## Overrides the default entrypoint of the Keycloak container
command: []

## Overrides the default args for the Keycloak container
args: []

# Additional environment variables for Keycloak
extraEnv: |
  - name: KEYCLOAK_WELCOME_THEME
    value: REDACT
  - name: KEYCLOAK_LOGLEVEL
    value: DEBUG
  - name: WILDFLY_LOGLEVEL
    value: INFO
  - name: KEYCLOAK_IMPORT
    value: /realm/pro-realm.json
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: dns.DNS_PING
  - name: JGROUPS_DISCOVERY_PROPERTIES
    value: 'dns_query={{ include "keycloak.serviceDnsName" . }}'
  - name: CACHE_OWNERS_COUNT
    value: "3"
  - name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
    value: "3"
  - name: KEYCLOAK_STATISTICS
    value: all
  - name: DB_VENDOR
    value: postgres
  - name: DB_ADDR
    value: REDACT
  - name: DB_PORT
    value: "5432"
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER_FILE
    value: /secrets/db-creds/db-username
  - name: DB_PASSWORD_FILE
    value: /secrets/db-creds/db-pwd
  - name: KEYCLOAK_USER_FILE
    value: /secrets/db-creds/keycloak-admin
  - name: KEYCLOAK_PASSWORD_FILE
    value: /secrets/db-creds/keycloak-password
  - name: JAVA_OPTS
    value: >-
      -server
      -Xms12g
      -Xmx12g
      -XX:+UseG1GC
      -XX:+UseCompressedOops
      -Djava.net.preferIPv4Stack=true
      -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS
      -Djava.awt.headless=true
      -Dwildfly.statistics-enabled=true

# Additional environment variables for Keycloak mapped from Secret or ConfigMap
extraEnvFrom: ""

#  Pod priority class name
priorityClassName: ""

# Pod affinity
affinity: |
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            {{- include "keycloak.selectorLabels" . | nindent 10 }}
          matchExpressions:
            - key: app.kubernetes.io/component
              operator: NotIn
              values:
                - strimzi
        topologyKey: kubernetes.io/hostname
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              {{- include "keycloak.selectorLabels" . | nindent 12 }}
            matchExpressions:
              - key: app.kubernetes.io/component
                operator: NotIn
                values:
                  - test
          topologyKey: topology.kubernetes.io/zone

# Topology spread constraints template
#topologySpreadConstraints: #Kubernetes 1.19 and later

# Node labels for Pod assignment
nodeSelector:
  app: mdic-keycloak

# Node taints to tolerate
tolerations: []

# Additional Pod labels
podLabels: {"app" : "REDACT", "component":"keycloak", "tier":"back-end", "customer-facing":"yes", "app-role":"auth"}

# Additional Pod annotations
podAnnotations: {}

# Liveness probe configuration
livenessProbe: |
  httpGet:
    path: /auth/
    port: http
  initialDelaySeconds: 300
  timeoutSeconds: 5

# Readiness probe configuration
readinessProbe: |
  httpGet:
    path: /auth/realms/master
    port: http
  initialDelaySeconds: 30
  timeoutSeconds: 1

# Startup probe configuration
startupProbe: |
  httpGet:
    path: /auth/
    port: http
  initialDelaySeconds: 30
  timeoutSeconds: 1
  failureThreshold: 60
  periodSeconds: 5

# Pod resource requests and limits
resources:
  requests:
    cpu: "1"
    memory: "20Gi"
  limits:
    cpu: "3"
    memory: "20Gi"

# Startup scripts to run before Keycloak starts up
startupScripts:
  # WildFly CLI script for configuring the node-identifier
  keycloak.cli: |
    {{- .Files.Get "scripts/keycloak.cli" }}

  custom.cli: |
      embed-server --server-config=standalone-ha.xml --std-out=echo
      /socket-binding-group=standard-sockets/socket-binding=https-admin/:add(port=8444)
      /subsystem=undertow/server=default-server/https-listener=https-admin:add(socket-binding=https-admin, security-realm=ApplicationRealm, enable-http2=true)
      /subsystem=undertow/configuration=filter/expression-filter=portAccess:add(expression="path-prefix('/auth/admin') and not equals(%p, 8444) -> response-code(403)")
      /subsystem=undertow/server=default-server/host=default-host/filter-ref=portAccess:add()
      /socket-binding-group=standard-sockets/socket-binding=log-access/:add(port=8445)
      /subsystem=undertow/configuration=filter/expression-filter=portAccess1:add(expression="path-prefix('/auth/realms/pro-realm/account/log') and not equals(%p, 8445) -> response-code(403)")
      /subsystem=undertow/server=default-server/host=default-host/filter-ref=portAccess1:add()
      /socket-binding-group=standard-sockets/socket-binding=selfservice-access/:add(port=8446)
      /subsystem=undertow/configuration=filter/expression-filter=portAccess2:add(expression="path-prefix('/auth/realms/pro-realm/clients-registrations') and not equals(%p, 8446) -> response-code(403)")
      /subsystem=undertow/server=default-server/host=default-host/filter-ref=portAccess2:add()
      /subsystem=keycloak-server/spi=connectionsHttpClient/provider=default:write-attribute(name=properties.max-connection-idle-time-millis,value=30L)
      /subsystem=keycloak-server/spi=connectionsHttpClient/provider=default:write-attribute(name=properties.connection-pool-size,value=16384)
      /subsystem=keycloak-server/spi=connectionsHttpClient/provider=default:write-attribute(name=properties.max-pooled-per-route,value=6000)
      /subsystem=infinispan/cache-container=ejb:write-attribute(name=statistics-enabled,value=true)
      /subsystem=infinispan/cache-container=keycloak:write-attribute(name=statistics-enabled,value=true)
      /subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=statistics-enabled,value=true)
      /subsystem=infinispan/cache-container=keycloak/replicated-cache=work/component=expiration/:write-attribute(name=lifespan,value=2592000000)
      /subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions/component=expiration/:write-attribute(name=lifespan,value=2592000000)
      /subsystem=infinispan/cache-container=keycloak/distributed-cache=clientSessions/component=expiration/:write-attribute(name=lifespan,value=2592000000)
      /subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions/component=expiration/:write-attribute(name=lifespan,value=2592000000)
      /subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineClientSessions/component=expiration/:write-attribute(name=lifespan,value=2592000000)
      /subsystem=infinispan/cache-container=keycloak/distributed-cache=authenticationSessions/component=expiration/:write-attribute(name=lifespan,value=2592000000)
      /subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures/component=expiration/:write-attribute(name=lifespan,value=2592000000)
      /subsystem=infinispan/cache-container=keycloak/distributed-cache=actionTokens/component=expiration/:write-attribute(name=lifespan,value=2592000000)

      stop-embedded-server

# Add additional volumes, e. g. for custom themes
extraVolumes: |
  - name: realm-secret
    secret:
      secretName: realm-secret
  - name: db-creds
    secret:
      secretName: keycloak-secret
  - name: theme
    emptyDir: {}
  - name: deploy
    emptyDir: {}

# Add additional volumes mounts, e. g. for custom themes
extraVolumeMounts: |
   - name: realm-secret
     mountPath: "/realm/"
     readOnly: true
   - name: db-creds
     mountPath: /secrets/db-creds
     readOnly: true
   - name: deploy
     mountPath: /opt/jboss/keycloak/standalone/deployments

# Add additional ports, e. g. for admin console or exposing JGroups ports
extraPorts:
   - name: https-admin
     protocol: TCP
     containerPort: 8444

# Pod disruption budget
podDisruptionBudget:
  maxUnavailable: 1

# Annotations for the StatefulSet
statefulsetAnnotations: {}

# Additional labels for the StatefulSet
statefulsetLabels: {}

# Configuration for secrets that should be created
secrets: {}
  # mysecret:
  #   type: {}
  #   annotations: {}
  #   labels: {}
  #   stringData: {}
  #   data: {}

service:
  # Annotations for headless and HTTP Services
  annotations: {}
  # Additional labels for headless and HTTP Services
  labels: {"app":"REDACT", "tier":"back-end", "customer-facing":"yes", "app-role":"auth"}
  # key: value
  # The Service type
  type: ClusterIP
  # Optional IP for the load balancer. Used for services of type LoadBalancer only
  loadBalancerIP: ""
  # The http Service port
  httpPort: 80
  # The HTTP Service node port if type is NodePort
  httpNodePort: null
  # The HTTPS Service port
  httpsPort: 8443
  # The HTTPS Service node port if type is NodePort
  httpsNodePort: null
  # The WildFly management Service port
  httpManagementPort: 9990
  # The WildFly management Service node port if type is NodePort
  httpManagementNodePort: null
  # Additional Service ports, e. g. for custom admin console
  extraPorts:
   - name: https-admin
     protocol: TCP
     port: 8444
  # When using Service type LoadBalancer, you can restrict source ranges allowed
  # to connect to the LoadBalancer, e. g. will result in Security Groups
  # (or equivalent) with inbound source ranges allowed to connect
  loadBalancerSourceRanges: []
  # When using Service type LoadBalancer, you can preserve the source IP seen in the container
  # by changing the default (Cluster) to be Local.
  # See https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
  #externalTrafficPolicy: "Cluster"
  # Session affinity
  # See https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-userspace
  sessionAffinity: ""
  # Session affinity config
  sessionAffinityConfig: {}

ingress:
  # If `true`, an Ingress is created
  enabled: false
  # The name of the Ingress Class associated with this ingress
  ingressClassName: ""
  # The Service port targeted by the Ingress
  servicePort: http
  # Ingress annotations
  annotations: {}
    ## Resolve HTTP 502 error using ingress-nginx:
    ## See https://www.ibm.com/support/pages/502-error-ingress-keycloak-response
    # nginx.ingress.kubernetes.io/proxy-buffer-size: 128k

  # Additional Ingress labels
  labels: {}
  # List of rules for the Ingress
  rules:
    -
      # Ingress host
      host: '{{ .Release.Name }}.keycloak.example.com'
      # Paths for the host
      paths:
        - path: /
          pathType: Prefix
  # TLS configuration
  tls:
    - hosts:
        - keycloak.example.com
      secretName: ""

  # ingress for console only (/auth/admin)
  console:
    # If `true`, an Ingress is created for console path only
    enabled: false
    # The name of Ingress Class associated with the console ingress only
    ingressClassName: "foo"
    # Ingress annotations for console ingress only
    # Useful to set nginx.ingress.kubernetes.io/whitelist-source-range particularly
    annotations: {}
    rules:
      -
        # Ingress host
        host: 'REDACT'
        # Paths for the host
        paths:
          - path: /auth/admin/
            pathType: Prefix

## Network policy configuration
networkPolicy:
  # If true, the Network policies are deployed
  enabled: false

  # Additional Network policy labels
  labels: {}

  # Define all other external allowed source
  # See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#networkpolicypeer-v1-networking-k8s-io
  extraFrom: []

route:
  # If `true`, an OpenShift Route is created
  enabled: false
  # Path for the Route
  path: /
  # Route annotations
  annotations: {}
  # Additional Route labels
  labels: {}
  # Host name for the Route
  host: ""
  # TLS configuration
  tls:
    # If `true`, TLS is enabled for the Route
    enabled: true
    # Insecure edge termination policy of the Route. Can be `None`, `Redirect`, or `Allow`
    insecureEdgeTerminationPolicy: Redirect
    # TLS termination of the route. Can be `edge`, `passthrough`, or `reencrypt`
    termination: edge

pgchecker:
  image:
    # Docker image used to check Postgresql readiness at startup
    repository: docker.io/busybox
    # Image tag for the pgchecker image
    tag: 1.32
    # Image pull policy for the pgchecker image
    pullPolicy: IfNotPresent
  # SecurityContext for the pgchecker container
  securityContext:
    allowPrivilegeEscalation: false
    runAsUser: 1000
    runAsGroup: 1000
    runAsNonRoot: true
  # Resource requests and limits for the pgchecker container
  resources:
    requests:
      cpu: "10m"
      memory: "16Mi"
    limits:
      cpu: "10m"
      memory: "16Mi"

postgresql:
  # If `true`, the Postgresql dependency is enabled
  enabled: false
  # PostgreSQL User to create
  postgresqlUsername: keycloak
  # PostgreSQL Password for the new user
  postgresqlPassword: "keycloakpw"
  # PostgreSQL Database to create
  postgresqlDatabase: keycloak
  # Persistent Volume Storage configuration

  # PostgreSQL network policy configuration
  networkPolicy:
    enabled: false

  ## Persistent Volume Storage configuration.
  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes
  ##
  persistence:
    ## Enable PostgreSQL persistence using Persistent Volume Claims.
    ##
    enabled: false
    size: 50Gi

serviceMonitor:
  # If `true`, a ServiceMonitor resource for the prometheus-operator is created
  enabled: true
  # Optionally sets a target namespace in which to deploy the ServiceMonitor resource
  namespace: monitoring
  # Optionally sets a namespace for the ServiceMonitor
  namespaceSelector:
    matchNames:
      - keycloak
  # Annotations for the ServiceMonitor
  annotations: {}
  # Additional labels for the ServiceMonitor
  labels: {"app":"REDACT", "component":"keycloak", "tier":"back-end", "customer-facing":"no", "app-role":"auth"}
  # Interval at which Prometheus scrapes metrics
  interval: 10s
  # Timeout for scraping
  scrapeTimeout: 10s
  # The path at which metrics are served
  path: /metrics
  # The Service port at which metrics are served
  port: http-management

extraServiceMonitor:
  # If `true`, a ServiceMonitor resource for the prometheus-operator is created
  enabled: true
  # Optionally sets a target namespace in which to deploy the ServiceMonitor resource
  namespace: monitoring
  # Optionally sets a namespace for the ServiceMonitor
  namespaceSelector:
    matchNames:
      - keycloak
  # Annotations for the ServiceMonitor
  annotations: {}
  # Additional labels for the ServiceMonitor
  labels: {"app":"strimzi", "component":"keycloak", "tier":"back-end", "customer-facing":"no", "app-role":"auth"}
  # Interval at which Prometheus scrapes metrics
  interval: 10s
  # Timeout for scraping
  scrapeTimeout: 10s
  # The path at which metrics are served
  path: /auth/realms/master/metrics
  # The Service port at which metrics are served
  port: http

prometheusRule:
  # If `true`, a PrometheusRule resource for the prometheus-operator is created
  enabled: true
  # Annotations for the PrometheusRule
  annotations: {}
  # Additional labels for the PrometheusRule
  labels: {"app":"REDACT", "role": "alert-rules", "component":"keycloak", "tier":"back-end", "customer-facing":"no", "app-role":"auth"}
  # List of rules for Prometheus
  rules:
   - alert: keycloak-IngressHigh5xxRate
     annotations:
       message: The percentage of 5xx errors for keycloak over the last 5 minutes is over 1%.
     expr: |
       (
         sum(
           rate(
             nginx_ingress_controller_response_duration_seconds_count{exported_namespace="keycloak",ingress="keycloak",status=~"5[0-9]{2}"}[1m]
           )
         )
         /
         sum(
           rate(
             nginx_ingress_controller_response_duration_seconds_count{exported_namespace="keycloak",ingress="keycloak"}[1m]
           )
         )
       ) * 100 > 1
     for: 5m
     labels:
       severity: warning

autoscaling:
  # If `true`, an autoscaling/v2beta2 HorizontalPodAutoscaler resource is created (requires Kubernetes 1.18 or above)
  # Autoscaling seems to be most reliable when using KUBE_PING service discovery (see README for details)
  # This disables the `replicas` field in the StatefulSet
  enabled: false
  # Additional HorizontalPodAutoscaler labels
  labels: {}
  # The minimum and maximum number of replicas for the Keycloak StatefulSet
  minReplicas: 3
  maxReplicas: 8
  # The metrics to use for scaling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
  # The scaling policy to use. This will scale up quickly but only scale down a single Pod per 5 minutes.
  # This is important because caches are usually only replicated to 2 Pods and if one of those Pods is terminated this will give the cluster time to recover.
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Pods
          value: 1
          periodSeconds: 300

test:
  # If `true`, test resources are created
  enabled: false
  image:
    # The image for the test Pod
    repository: docker.io/unguiculus/docker-python3-phantomjs-selenium
    # The tag for the test Pod image
    tag: v1
    # The image pull policy for the test Pod image
    pullPolicy: IfNotPresent
  # SecurityContext for the entire test Pod
  podSecurityContext:
    fsGroup: 1000
  # SecurityContext for the test container
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  # See https://helm.sh/docs/topics/charts_hooks/#hook-deletion-policies
  deletionPolicy: before-hook-creation
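One side note on the prometheusRule entry near the end of those values: the expression is just "5xx responses as a percentage of all responses, alert above 1%". A standalone sketch of that arithmetic (illustration only, not part of the chart):

```shell
# Mirror of the PromQL above: 5xx response rate over total response rate,
# as a percent; the rule fires when the result stays above 1 for 5 minutes.
five_xx_percentage() {  # usage: five_xx_percentage ERR_RATE TOTAL_RATE
  awk -v e="$1" -v t="$2" 'BEGIN { print 100 * e / t }'
}

five_xx_percentage 2 100   # 2 req/s of 5xx out of 100 req/s total -> 2
```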
jrivers96 commented 2 years ago

I just saw this -

Keycloak 17+ (e.g. quay.io/keycloak/keycloak:17.0.0) doesn't support autogeneration of a self-signed cert. Minimal HTTPS working example for Keycloak 17+:

https://stackoverflow.com/questions/49859066/keycloak-docker-https-required/49874353#49874353
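A minimal sketch of that workaround (filenames and the CN are placeholders, not from this thread): since the newer images won't generate a certificate for you, create a self-signed one yourself and hand it to the cluster.

```shell
# Generate a self-signed key/cert pair (placeholder CN; adjust to your hostname).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout tls.key -out tls.crt -days 365 \
  -subj "/CN=keycloak.example.com"

# Then, for example, use it as the ingress TLS secret:
# kubectl create secret tls keycloak-tls --cert=tls.crt --key=tls.key
```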

jrivers96 commented 2 years ago

I personally fixed this problem by ensuring the ssl-context was set to kcSSLContext rather than the ApplicationRealm security realm. I'm not sure if this was the same problem mentioned in this ticket.

/subsystem=undertow/server=default-server/https-listener=https-admin:add(socket-binding=https-admin, ssl-context="kcSSLContext", enable-http2=true)

github-actions[bot] commented 2 years ago

This issue has been marked as stale because it has been open for 30 days with no activity. It will be automatically closed in 10 days if no further activity occurs.