
[bitnami/keycloak] keycloak-config-cli not working on aws #5779

Closed. iamaverrick closed this issue 3 years ago.

iamaverrick commented 3 years ago

Which chart: keycloak 2.3.0

Describe the bug When keycloakConfigCli is enabled, the Helm release fails with ERROR: command for release [keycloak] returned [ 1 ] exit code and error message [ Error: failed post-install: job failed: BackoffLimitExceeded ]. Running kubectl logs keycloak-keycloak-config-cli-65z6n -n keycloak shows the following:

2021-03-14 18:57:26.095  INFO 1 --- [           main] d.a.k.config.KeycloakConfigApplication   : Starting KeycloakConfigApplication v3.1.0 using Java 11.0.10 on keycloak-keycloak-config-cli-65z6n with PID 1 (/opt/bitnami/keycloak-config-cli/keycloak-config-cli-12.0.3.jar started by ? in /opt/bitnami/keycloak-config-cli)
2021-03-14 18:57:26.099  INFO 1 --- [           main] d.a.k.config.KeycloakConfigApplication   : No active profile set, falling back to default profiles: default
2021-03-14 18:57:27.131  INFO 1 --- [           main] d.a.k.config.KeycloakConfigApplication   : Started KeycloakConfigApplication in 1.897 seconds (JVM running for 2.768)
2021-03-14 18:57:27.785  INFO 1 --- [           main] d.a.k.c.provider.KeycloakImportProvider  : Importing file '/config/company-config.yml'
2021-03-14 18:57:28.248  INFO 1 --- [           main] d.a.k.c.provider.KeycloakImportProvider  : Importing file '/config/company-configs.yml'
2021-03-14 18:57:28.344  INFO 1 --- [           main] d.a.k.config.provider.KeycloakProvider   : Wait 120 seconds until http://keycloak-headless:8080/auth is available ...
2021-03-14 18:58:27.282  WARN 1 --- [      Finalizer] org.jboss.resteasy.client.jaxrs.i18n     : RESTEASY004687: Closing a class org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient43Engine instance for you. Please close clients yourself.
(the RESTEASY004687 WARN line above is repeated 28 more times)
2021-03-14 18:59:28.364 ERROR 1 --- [           main] d.a.k.config.KeycloakConfigRunner        : Could not connect to keycloak in 120 seconds: HTTP 403 Forbidden
2021-03-14 18:59:28.365  INFO 1 --- [           main] d.a.k.config.KeycloakConfigRunner        : keycloak-config-cli running in 02:00.583.

I'm thinking the issue is this line: Wait 120 seconds until http://keycloak-headless:8080/auth is available. When using a cloud provider we need to go through a load balancer, which usually listens on port 80; also the URL is http when it should be https...

When running locally everything works fine, because I run the service on port 8080 anyway, so the config-cli is able to find the service.

marcosbc commented 3 years ago

Hi @iamaverrick, your error looks like an authentication issue. Could it be that the deployment is using persisted data with wrong credentials? To check that, you should be able to deploy under a different release name / namespace and check whether the error still happens.
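
For example (release and namespace names here are just placeholders), something like

$ helm install keycloak-test bitnami/keycloak --namespace keycloak-test --create-namespace -f values.yaml

would give you a fresh release with no persisted data to compare against.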

In addition, could you share the values.yaml file used to deploy Keycloak?

iamaverrick commented 3 years ago

Hello @marcosbc,

I don't think that's the case, because at the moment I'm still testing in the production environment and I have persistence set to false. But as requested, here is my current configuration file. We are also having another issue, #5074, which I'm not sure is related, since keycloak-config-cli uses the headless service. Thanks in advance.

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
#   storageClass: myStorageClass

## Force target Kubernetes version (using Helm capabilities if not set)
##
kubeVersion:

## String to partially override keycloak.fullname template (will maintain the release name)
##
# nameOverride:

## String to fully override keycloak.fullname template
##
# fullnameOverride:

## Add labels to all the deployed resources
##
commonLabels: {}

## Add annotations to all the deployed resources
##
commonAnnotations: {}

## Kubernetes Cluster Domain
##
clusterDomain: cluster.local

## Extra objects to deploy (value evaluated as a template)
##
extraDeploy: []

## Bitnami Keycloak image version
## ref: https://hub.docker.com/r/bitnami/keycloak/tags/
##
image:
  registry: docker.io
  repository: bitnami/keycloak
  tag: 12.0.4-debian-10-r9
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ## Example:
  ## pullSecrets:
  ##   - myRegistryKeySecretName
  ##
  pullSecrets: []
  ## Set to true if you would like to see extra information on logs
  ##
  debug: true

## Keycloak authentication parameters
## ref: https://github.com/bitnami/bitnami-docker-keycloak#admin-credentials
##
auth:
  ## Create administrator user on boot.
  ##
  createAdminUser: true
  ## Keycloak administrator user and password
  ##
  adminUser: "testing"
  adminPassword: "testing"
  ## Wildfly management user and password
  ##
  managementUser: "testing"
  managementPassword: "testing"
  ## An already existing secret containing auth info
  ##
  # existingSecret:
  #   name: mySecret
  #   keyMapping:
  #     admin-password: myPasswordKey
  #     management-password: myManagementPasswordKey
  #     database-password: myDatabasePasswordKey
  #     tls-keystore-password: myTlsKeystorePasswordKey
  #     tls-truestore-password: myTlsTruestorePasswordKey
  #
  ## Map of already existing secrets containing passwords
  ##
  ## Override `existingSecret` and other secret values
  ##
  # existingSecretPerPassword:
  #   keyMapping:
  #     adminPassword: KEYCLOAK_ADMIN_PASSWORD
  #     managementPassword: KEYCLOAK_MANAGEMENT_PASSWORD
  #     databasePassword: password
  #     tlsKeystorePassword: JKS_KEYSTORE_TRUSTSTORE_PASSWORD
  #     tlsTruststorePassword: JKS_KEYSTORE_TRUSTSTORE_PASSWORD
  #   adminPassword:
  #     name: keycloak-test2.credentials # release-name
  #   managementPassword:
  #     name: keycloak-test2.credentials
  #   databasePassword:
  #     name: keycloak.pocwatt-keycloak-cluster.credentials
  #   tlsKeystorePassword:
  #     name: keycloak-test2.credentials
  #   tlsTruststorePassword:
  #     name: keycloak-test2.credentials
  #
  ## TLS encryption parameters
  ## ref: https://github.com/bitnami/bitnami-docker-keycloak#tls-encryption
  ##
  tls:
    enabled: false
    ## Name of the existing secret containing the truststore and one keystore per Keycloak replica
    ## Create this secret following the steps below:
    ## 1) Generate your truststore and keystore files (more info at https://github.com/keycloak/keycloak-documentation/blob/master/openshift/topics/advanced_concepts.adoc#creating-https-and-jgroups-keystores-and-truststore-for-the-project_name-server)
    ## 2) Rename your truststore to `keycloak.truststore.jks`.
    ## 3) Rename your keystores to `keycloak-X.keystore.jks` where X is the ID of each Keycloak replica
    ## 4) Run the command below where SECRET_NAME is the name of the secret you want to create:
    ##       kubectl create secret generic SECRET_NAME --from-file=./keycloak.truststore.jks --from-file=./keycloak-0.keystore.jks --from-file=./keycloak-1.keystore.jks ...
    ##
    # jksSecret:
    ## Password to access the keystore when it's password-protected.
    ##
    keystorePassword: ""
    ## Password to access the truststore when it's password-protected.
    ##
    truststorePassword: ""

## Enable Proxy Address Forwarding
## ref: https://www.keycloak.org/docs/latest/server_installation/#_setting-up-a-load-balancer-or-proxy
##
proxyAddressForwarding: true

## Keycloak Service Discovery settings
## ref: https://github.com/bitnami/bitnami-docker-keycloak#cluster-configuration
##
serviceDiscovery:
  enabled: true
  ## Sets the protocol that Keycloak nodes would use to discover new peers
  ## Available protocols can be found at http://www.jgroups.org/javadoc3/org/jgroups/protocols/
  ##
  protocol: kubernetes.KUBE_PING
  ## Properties for the discovery protocol set in serviceDiscovery.protocol parameter
  ## List of key=>value pairs
  ## Example:
  ## properties:
  ##   - datasource_jndi_name=>"java:jboss/datasources/KeycloakDS"
  ##   - initialize_sql=>"CREATE TABLE IF NOT EXISTS JGROUPSPING ( own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, created timestamp default current_timestamp, ping_data BYTEA, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name))"
  ##
  properties: []
  ## Transport stack for the discovery protocol set in serviceDiscovery.protocol parameter
  ##
  transportStack: tcp

## Keycloak cache settings
## ref: https://github.com/bitnami/bitnami-docker-keycloak#cluster-configuration
##
cache:
  ## Number of nodes that will replicate cached data
  ##
  ownersCount: 1
  ## Number of nodes that will replicate cached authentication data
  ##
  authOwnersCount: 1

## Keycloak Configuration
## Specify content for standalone-ha.xml
## NOTE: This will override configuring Keycloak based on environment variables (including those set by the chart)
## The standalone-ha.xml is auto-generated based on other parameters when this parameter is not specified
##
## Example:
## configuration: |-
##    foo: bar
##    baz:
##
# configuration:

## Existing ConfigMap with Keycloak Configuration
## NOTE: When it's set the configuration parameter is ignored
##
# existingConfigmap:

## Add extra args to default startup command
##
extraStartupArgs:

## initdb scripts
## Specify dictionary of scripts to be run at first boot
## ref: https://github.com/bitnami/bitnami-docker-keycloak#initializing-a-new-instance
## Example:
## initdbScripts:
##   my_init_script.sh: |
##      #!/bin/bash
##      echo "Do something."
##
initdbScripts: {}

## Existing ConfigMap with custom init scripts
##
# initdbScriptsConfigMap:

## Command and args for running the container (set to default if not set). Use array form
##
command: []
args: []

## An array to add extra env vars
## Example:
## extraEnvVars:
##   - name: FOO
##     value: "bar"
##
extraEnvVars: []
#  - name: SECRETS
#    value: "/var/keycloak/secrets"

## ConfigMap with extra environment variables
##
extraEnvVarsCM:

## Secret with extra environment variables
##
extraEnvVarsSecret:

## Number of Keycloak replicas to deploy
##
replicaCount: 1

## Keycloak container ports to open
##
containerPorts:
  http: 8080
  https: 8443

## Keycloak containers' SecurityContext
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
podSecurityContext:
  enabled: true
  fsGroup: 1001

## Keycloak pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
containerSecurityContext:
  enabled: true
  runAsUser: 1001
  runAsNonRoot: true

## Keycloak resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits: {}
  #   cpu: 200m
  #   memory: 256Mi
  requests: {}
  #   cpu: 200m
  #   memory: 10Mi

## Keycloak containers' liveness and readiness probes.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
##
livenessProbe:
  enabled: true
  httpGet:
    path: /auth/
    port: http
  initialDelaySeconds: 300
  periodSeconds: 1
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1
readinessProbe:
  enabled: true
  httpGet:
    path: /auth/realms/master
    port: http
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 1
  failureThreshold: 3
  successThreshold: 1

## Custom Liveness probes for Keycloak
##
customLivenessProbe: {}

## Custom readiness probes for Keycloak
##
customReadinessProbe: {}

## Strategy to use to update Pods
##
updateStrategy:
  ## StrategyType
  ## Can be set to RollingUpdate or OnDelete
  ##
  type: RollingUpdate

## Pod affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
##
podAffinityPreset: ""

## Pod anti-affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
##
podAntiAffinityPreset: soft

## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
## Allowed values: soft, hard
##
nodeAffinityPreset:
  ## Node affinity type
  ## Allowed values: soft, hard
  ##
  type: ""
  ## Node label key to match
  ## E.g.
  ## key: "kubernetes.io/e2e-az-name"
  ##
  key: ""
  ## Node label values to match
  ## E.g.
  ## values:
  ##   - e2e-az1
  ##   - e2e-az2
  ##
  values: []

## Affinity for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

## Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Pod extra labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}

## Annotations for server pods.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}

## Keycloak pods' priority.
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
# priorityClassName: ""

## lifecycleHooks for the Keycloak container to automate configuration before or after startup.
##
lifecycleHooks: {}

## Extra volumes to add to the deployment
##
extraVolumes: []
#  - name: keycloak-secrets
#    secret:
#      secretName: keycloak-secrets

## Extra volume mounts to add to the container
##
extraVolumeMounts: []
#  - mountPath: "/var/keycloak/secrets"
#    name: keycloak-secrets
#    readOnly: true

## Add init containers to the Keycloak pods.
## Example:
## initContainers:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
initContainers: {}

## Add sidecars to the Keycloak pods.
## Example:
## sidecars:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
sidecars: {}

## Service configuration
##
service:
  ## Service type.
  ##
  type: ClusterIP
#  type: NodePort # Dev purposes
  ## HTTP Port
  ##
  port: 80
  ## HTTPS Port
  ##
  httpsPort: 443
  ## Specify the nodePort values for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  nodePorts:
    http: ""
    https: ""
  ## Service clusterIP.
  ##
  # clusterIP: None
  ## loadBalancerIP for the Keycloak Service (optional, cloud specific)
  ## ref: http://kubernetes.io/docs/user-guide/services/#type-loadbalancer
  ##
  # loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ## Example:
  ## loadBalancerSourceRanges:
  ##   - 10.10.10.0/24
  ##
  loadBalancerSourceRanges: []
  ## Enable client source IP preservation
  ## ref http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
  ##
  externalTrafficPolicy: Cluster
  ## Provide any additional annotations which may be required (evaluated as a template).
  ##
  annotations: {}

## Ingress configuration
##
ingress:
  ## Set to true to enable ingress record generation
  ##
  enabled: true

  ## Set this to true in order to add the corresponding annotations for cert-manager
  ##
  certManager: false

  ## When the ingress is enabled, a host pointing to this will be created
  ##
  hostname: keycloak.company.com
#  hostname: keycloak.local # Dev purposes

  ## Override API Version (automatically detected if not set)
  ##
  apiVersion:

  ## Ingress Path
  ##
  path: /

  ## Ingress Path type
  ##
  pathType: ImplementationSpecific

  ## Ingress annotations done as key:value pairs
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
  ##
  ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
  ##
  annotations:
    kubernetes.io/ingress.class: "internal.company.com"
    nginx.org/redirect-to-https: "True"
    ingress.kubernetes.io/ssl-redirect: "False"
#    nginx.org/server-snippets: |
#      location /auth {
#        proxy_set_header X-Forwarded-For $host;
#        proxy_set_header X-Forwarded-Proto $scheme;
#      }

#    nginx.org/hsts: "True"
#    nginx.org/hsts-behind-proxy: "True"

  ## Enable TLS configuration for the hostname defined at ingress.hostname parameter
  ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
  ## You can use the ingress.secrets parameter to create this TLS secret, relay on cert-manager to create it, or
  ## let the chart create self-signed certificates for you
  ##
  tls: false

  ## The list of additional hostnames to be covered with this ingress record.
  ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
  ## Example:
  ## extraHosts:
  ##   - name: keycloak.local
  ##     path: /
  ##
  extraHosts: []

  ## The tls configuration for additional hostnames to be covered with this ingress record.
  ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
  ## Example:
  ## extraTls:
  ## - hosts:
  ##     - keycloak.local
  ##   secretName: keycloak.local-tls
  ##
  extraTls: []

  ## If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or -----BEGIN RSA PRIVATE KEY-----
  ## name should line up with a secretName set further up
  ##
  ## If it is not set and you're using cert-manager, this is unneeded, as it will create the secret for you
  ## If it is not set and you're NOT using cert-manager either, self-signed certificates will be created
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  ##
  ## Example
  ## secrets:
  ##   - name: aspnet-core.local-tls
  ##     key: ""
  ##     certificate: ""
  ##
  secrets: []

## Network Policy configuration
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
##
networkPolicy:
  ## Enable creation of NetworkPolicy resources
  ##
  enabled: false
  ## The Policy model to apply. When set to false, only pods with the correct
  ## client label will have network access to the ports Keycloak is listening
  ## on. When true, Keycloak will accept connections from any source
  ## (with the correct destination port).
  ##
  allowExternal: true
  ## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
  ## Example:
  ## additionalRules:
  ##   - matchLabels:
  ##       - role: frontend
  ##   - matchExpressions:
  ##       - key: role
  ##         operator: In
  ##         values:
  ##           - frontend
  ##
  additionalRules: {}

## Specifies whether RBAC resources should be created
##
rbac:
  create: true
  ## Custom RBAC rules
  ## Example:
  ## rules:
  ##   - apiGroups:
  ##       - ""
  ##     resources:
  ##       - pods
  ##     verbs:
  ##       - get
  ##       - list
  ##
  rules: []

## Specifies whether a ServiceAccount should be created
##
serviceAccount:
  create: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the fullname template
  ##
  name: ""

## Keycloak Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
  create: false
  ## Min number of pods that must still be available after the eviction
  ##
  minAvailable: 1
  ## Max number of pods that can be unavailable after the eviction
  ##
  # maxUnavailable: 1

## Keycloak Autoscaling configuration
##
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  # targetCPU: 50
  # targetMemory: 50

## Metrics configuration
##
metrics:
  ## Enable Keycloak statistics
  ## ref: https://github.com/bitnami/bitnami-docker-keycloak#enabling-statistics
  ##
  enabled: true

  ## Keycloak metrics service parameters
  ##
  service:
    ## HTTP management port
    ##
    port: 9990
    ## Annotations for the Prometheus exporter service
    ##
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "{{ .Values.metrics.service.port }}"

  ## Prometheus Operator ServiceMonitor configuration
  ##
  serviceMonitor:
    ## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
    ##
    enabled: false
    ## Specify the namespace in which the serviceMonitor resource will be created
    ##
    # namespace: ""
    ## Specify the interval at which metrics should be scraped
    ##
    interval: 30s
    ## Specify the timeout after which the scrape is ended
    ##
    # scrapeTimeout: 30s
    ## Specify Metric Relabellings to add to the scrape endpoint
    ##
    # relabellings:
    ## Specify honorLabels parameter to add the scrape endpoint
    ##
    honorLabels: false
    ## Specify the release for ServiceMonitor. Sometimes it should be custom for prometheus operator to work
    ##
    # release: ""
    ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    ##
    additionalLabels: {}
##
## PostgreSQL chart configuration
## ref: https://github.com/bitnami/charts/blob/master/bitnami/postgresql/values.yaml
##
postgresql:
  ## Whether to deploy a postgresql server to satisfy the applications database requirements. To use an external database set this to false and configure the externalDatabase parameters
  ##
  enabled: true
  ## PostgreSQL user (has superuser privileges if username is `postgres`)
  ## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run
  ##
  postgresqlUsername: "test"
  ## PostgreSQL password
  ## Defaults to a random 10-character alphanumeric string if not set
  ## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run
  ##
  postgresqlPassword: "test"
  ## Database name to create
  ## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-on-first-run
  ##
  postgresqlDatabase: "test"
  ## PostgreSQL data Persistent Volume Storage Class
  ##
  persistence:
    enabled: false

##
## External database configuration
##
externalDatabase:
  ## Database host
  ##
  host: ""
  ## Database port
  ##
  port: 5432
  ## non admin username for Keycloak Database
  ##
  user: "test"
  ## Database password
  ##
  password: "test"
  ## Database name
  ##
  database: "test"

## Configuration for keycloak-config-cli
## ref: https://github.com/adorsys/keycloak-config-cli
##
keycloakConfigCli:
  ## Whether to enable keycloak-config-cli
  ##
  enabled: true

  ## Bitnami keycloak-config-cli image
  ## ref: https://hub.docker.com/r/bitnami/keycloak-config-cli/tags/
  ##
  image:
    registry: docker.io
    repository: bitnami/keycloak-config-cli
    tag: 3.1.0-debian-10-r16
    ## Specify an imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []

  ## Annotations for keycloak-config-cli job
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  annotations:
    helm.sh/hook: "post-install,post-upgrade,post-rollback"
    helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
    helm.sh/hook-weight: "5"

  ## Command and args for running the container (set to default if not set). Use array form
  ##
  command: []
  args: []

  ## Job pod host aliases
  ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
  ##
  hostAliases: []

  ## keycloak-config-cli resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 200m
    #   memory: 256Mi
    requests: {}
    #   cpu: 200m
    #   memory: 10Mi

  ## keycloak-config-cli containers' Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
  ##
  containerSecurityContext:
    enabled: true
    runAsUser: 1001
    runAsNonRoot: true

  ## keycloak-config-cli pods' Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
  ##
  podSecurityContext:
    enabled: true
    fsGroup: 1001

  ## Number of retries before considering a Job as failed
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy
  ##
  backoffLimit: 1

  ## Pod extra labels
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  ##
  podLabels: {}

  ## Annotations for job pod
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations: {}

  ## Additional environment variables to set
  ## Example:
  ## extraEnvVars:
  ##   - name: FOO
  ##     value: "bar"
  ##
  extraEnvVars: []

  ## ConfigMap with extra environment variables
  ##
  extraEnvVarsCM:

  ## Secret with extra environment variables
  ##
  extraEnvVarsSecret:

  ## Extra volumes to add to the job
  ##
  extraVolumes: []

  ## Extra volume mounts to add to the container
  ##
  extraVolumeMounts: []

  ## keycloak-config-cli configuration files
  ## NOTE: nil keys will be considered files to import locally
  ## Example:
  ## configuration:
  ##   realm1.json: |
  ##     {
  ##       "realm": "realm1",
  ##       "clients": []
  ##     }
  ##   files/realm2.yaml:
  ##   realm3.yaml: |
  ##     realm: realm3
  ##     clients: []
  ##
  configuration: {}

  ## ConfigMap with keycloak-config-cli configuration
  ## NOTE: This will override keycloakConfigCli.configuration
  ##
  existingConfigmap: kc-config
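
For context, the kc-config ConfigMap referenced above holds the import files seen in the job logs. A sketch of how such a ConfigMap would typically be created (file names assumed from the log output, namespace from the kubectl command earlier):

$ kubectl create configmap kc-config -n keycloak --from-file=company-config.yml --from-file=company-configs.yml
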
marcosbc commented 3 years ago

Hi @iamaverrick, we were able to get the CLI working without issues, using this configuration:

auth:
  adminUser: testing
  adminPassword: testing
  managementUser: testing
  managementPassword: testing
ingress:
  enabled: true
  hostname: keycloak.fuf.me
keycloakConfigCli:
  enabled: true
  configuration:
    realm1.json: |
      {
        "realm": "realm1",
        "clients": []
      }
    realm3.yaml: |
      realm: realm3
      clients: []

When we accessed the UI, the realms had been created without issues, so it seems to be something related to your configuration.

Could you check if you are able to access the headless endpoint from the Keycloak container?

$ k exec -it keycloak-0 -- bash
$ curl -I http://keycloak-headless:8080/auth
iamaverrick commented 3 years ago

Hello @marcosbc

Here is what I got when I ran the commands above. I basically had to set keycloakConfigCli.enabled to false in order to get the service to run in the cloud and then run the command, because if I enable it, the release fails to start.

kubectl exec -it keycloak-0 -- bash
I have no name!@keycloak-0:/$ curl -I http://keycloak-headless:8080/auth
HTTP/1.1 303 See Other
Connection: keep-alive
X-XSS-Protection: 1; mode=block
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
Location: http://keycloak-headless:8080/auth/
Referrer-Policy: no-referrer
Content-Length: 0
Date: Tue, 16 Mar 2021 21:39:16 GMT
marcosbc commented 3 years ago

Hi @iamaverrick, sorry for having to ask this again, but it seems there is a redirection that blocks this. Could you try this?

$ k exec -it keycloak-0 -- bash
$ curl -L -I http://keycloak-headless:8080/auth/

We'd like to understand why the HTTP 403 error occurs on your side, since we were not able to get anything remotely similar. Even with deliberately wrong credentials, we got a different HTTP code: 401 Unauthorized.

Once you get it working, you can also try and replicate the keycloak-config-cli commands by executing them manually with the environment used by the Helm chart (you can obtain that via kubectl describe pod ... for the CLI pod).

Since the pod runs as a Job and exits immediately, you could either temporarily install keycloak-config-cli in the Keycloak pod (in the same way as the Docker image does), or temporarily add an external pod using the keycloak-config-cli image just for debugging. That should allow you to debug this issue further.
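
For instance, a throwaway debug pod could look like this (a sketch only: the pod name is arbitrary, and the image tag, namespace and credentials must match your deployment, here taken from the values file and logs above):

$ kubectl run kc-cli-debug -n keycloak --rm -it --restart=Never \
    --image=docker.io/bitnami/keycloak-config-cli:3.1.0-debian-10-r16 \
    --env="KEYCLOAK_URL=http://keycloak-headless:8080/auth" \
    --env="KEYCLOAK_USER=testing" --env="KEYCLOAK_PASSWORD=testing" \
    --command -- bash

From that shell you can curl the headless service and launch the CLI manually to see the full error.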

iamaverrick commented 3 years ago

Hello @marcosbc,

I was able to run the config-cli Helm chart for debug purposes with the following, and it works...

env:
  KEYCLOAK_URL: "https://keycloak.company.com"
  KEYCLOAK_USER: "username"
  KEYCLOAK_PASSWORD: "password" 

By explicitly defining the URL I was able to run the config with no issues in the production environment. The problem is that the sub-chart does not use this URL; I should be able to use the headless service to run the config. I'm not sure what could be causing this.

I also tried each of these separately and it fails; I'm not sure why:

env:
  KEYCLOAK_URL: "http://keycloak.cloud:80"
  KEYCLOAK_URL: "http://keycloak.cloud:443"
  KEYCLOAK_URL: "http://keycloak-headless.cloud:80"
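
For the sub-chart, the equivalent override would presumably go through keycloakConfigCli.extraEnvVars in the chart values (a sketch, using the headless URL from the job logs):

keycloakConfigCli:
  extraEnvVars:
    - name: KEYCLOAK_URL
      value: "http://keycloak-headless:8080/auth"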

marcosbc commented 3 years ago

i also tried these each separately and it fails. i'm not sure why

Could you share more details? We'd like to understand the specific error you're seeing.

That said, note that we are still unable to reproduce your issues with the values.yaml file we shared above. In fact, while the keycloak-config-cli was waiting to connect to Keycloak we could enter its shell (via kubectl exec -ti POD_NAME bash).

Note that until the main Keycloak pod shows Admin console listening on http://127.0.0.1:9990 in the console, it will not have completed initialization and you may be unable to connect:

$ curl -vvv http://mbkc1-keycloak-headless:8080/auth/
...
*   Trying 10.30.2.122...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x563a0a794fb0)
* connect to 10.30.2.122 port 8080 failed: Connection refused
* Failed to connect to mbkc1-keycloak-headless port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to mbkc1-keycloak-headless port 8080: Connection refused

Once Keycloak is running it works for us without issues:

$ curl -vvv http://mbkc1-keycloak-headless:8080/auth/
...
*   Trying 10.30.2.122...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x560dc2c65fb0)
* Connected to mbkc1-keycloak-headless (10.30.2.122) port 8080 (#0)
> GET /auth/ HTTP/1.1
> Host: mbkc1-keycloak-headless:8080
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Cache-Control: no-cache, must-revalidate, no-transform, no-store
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: SAMEORIGIN
< Referrer-Policy: no-referrer
< Content-Security-Policy: frame-src 'self'; frame-ancestors 'self'; object-src 'none';
< Date: Thu, 18 Mar 2021 15:07:42 GMT
< Connection: keep-alive
< X-Robots-Tag: none
< Strict-Transport-Security: max-age=31536000; includeSubDomains
< X-Content-Type-Options: nosniff
< Content-Type: text/html;charset=utf-8
< Content-Length: 4084
<
<!--
  ~ JBoss, Home of Professional Open Source.
...

Could it be that Keycloak is not getting initialized in time (in that 120 seconds period) causing the Config CLI to fail?
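
One quick way to check that timing (a sketch; pod name and namespace as in your cluster):

$ kubectl logs -f keycloak-0 -n keycloak | grep "Admin console listening"

and compare the timestamp of that line against the moment the keycloak-config-cli job starts waiting.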

iamaverrick commented 3 years ago

I don't think it's an issue of not being able to connect within 120s. I can't seem to connect or communicate with the headless service for some reason; I tried pinging the headless service and it fails.

Also, my issue is in the production environment, which is k8s running in AWS. Please keep in mind, as I mentioned above, that locally everything works great, meaning in a minikube environment.

Could you please tell me what environment you are running and testing the service in? Is it locally using minikube?

marcosbc commented 3 years ago

Hi @iamaverrick, we tested the chart on GKE and it works without issues.

Note that you should even be able to run curl http://keycloak-headless:8080/auth/ from inside of the main Keycloak pod.

If kubectl get endpoints shows the headless service, then there is no reason for that URL not to be accessible via http://keycloak-headless:8080/auth/, unless a NetworkPolicy is enabled that blocks it, or there is a network issue specific to your cloud provider.
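
For example (service name from the logs, namespace assumed from the commands earlier), the following should list at least one pod IP:

$ kubectl get endpoints keycloak-headless -n keycloak

An empty ENDPOINTS column would point to a label-selector or pod-readiness problem rather than DNS.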

iamaverrick commented 3 years ago

I see. I will continue to work through this issue and the other one as well; hopefully I can get an update. I'm not too concerned, because as a workaround I can get the config to work with the keycloak-config-cli Helm chart, but eventually I will want everything to work together. Thanks for your prompt responses and eagerness to help on this issue.

seagyn commented 3 years ago

I am also experiencing this inside of an AWS cluster. I think it might be due to networking/service discovery inside of AWS/their VPC setup.

I could just use the external ingress endpoint (a route in the case of OpenShift), and it worked after just updating the extra env vars in the Helm chart under the CLI section:

extraEnvVars:
  - name: KEYCLOAK_URL
    value: https://sso.mydomain.com/auth

I will see if I can figure it out from an AWS perspective (might be network policy related, might be service account related or even RBAC)

iamaverrick commented 3 years ago

Hello @seagyn,

This solution did not and does not work for me, because we use external-dns to create the DNS records in AWS, so the page does not become available in under 120s and it still fails. I think it's a networking issue; I do set rbac to enabled in the cloud environment. I'll wait until this gets resolved for us AWS users. In the meantime I will continue to configure Keycloak with This.

seagyn commented 3 years ago

Hi @iamaverrick, I'm not sure what kind of timelines you have, but you can also set KEYCLOAK_AVAILABILITYCHECK_TIMEOUT to give the job a longer wait.
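
For example (a sketch: the 600s value is arbitrary, pick whatever covers your external-dns propagation delay):

keycloakConfigCli:
  extraEnvVars:
    - name: KEYCLOAK_AVAILABILITYCHECK_TIMEOUT
      value: "600s"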

github-actions[bot] commented 3 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 3 years ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.