Is this a request for help?: yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Version of Helm and Kubernetes:
Helm client/server: 2.11.0/2.11.0
Which chart:
stable/keycloak
What happened:
Deployed a single install that also spins up the postgres chart.
Afterwards I tried changing the basepath + ingress, and I can no longer log in to the security admin console: incorrect redirect.
What you expected to happen:
I would expect it to correctly redirect.
How to reproduce it (as minimally and precisely as possible):
1. Set `keycloak.basepath` to `keyauth` and `keycloak.ingress.path` to `/keyauth` in values.yaml.
2. Run `helm upgrade myrelease stable/keycloak -f values.yaml`.
3. Try to log in to the security admin console.
Anything else we need to know:
These are my values:
```yaml
init:
  image:
    repository: alpine
    tag: 3.8
    pullPolicy: IfNotPresent
clusterDomain: cluster.local
keycloak:
  replicas: 1
  image:
    repository: jboss/keycloak
    tag: 4.6.0.Final
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    pullSecrets:
      - name: docker-reg-secret
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
    runAsNonRoot: true
  ## The path keycloak will be served from. To serve keycloak from the root path, use two quotes (e.g. "").
  basepath: "keyauth"
  ## Additional init containers, e. g. for providing custom themes
  extraInitContainers: |
    - name: keycloak-custom-providers
      image: docker.limeds.be/ibcndevs/keycloak-custom-plugins:1.0.2
      imagePullPolicy: IfNotPresent
      command:
        - sh
      args:
        - -c
        - |
          cp -R /myproviders/* /providers
      volumeMounts:
        - name: providers
          mountPath: /providers
  ## Add additional volumes and mounts, e. g. for custom themes
  extraVolumeMounts: |
    - name: providers
      mountPath: /opt/jboss/keycloak/standalone/deployments/
  extraVolumes: |
    - name: providers
      emptyDir: {}
  ## Additional sidecar containers, e. g. for a database proxy, such as Google's cloudsql-proxy
  extraContainers: |
  ## Custom script that is run before Keycloak is started.
  preStartScript:
  ## Additional arguments to start command e.g. -Dkeycloak.import= to load a realm
  extraArgs: ""
  ## Username for the initial Keycloak admin user
  username: admin
  ## Password for the initial Keycloak admin user
  ## If not set, a random 10 characters password will be used
  password: ""
  ## Allows the specification of additional environment variables for Keycloak
  extraEnv: |
    - name: PROXY_ADRESS_FORWARDING
      value: "true"
    # - name: KEYCLOAK_LOGLEVEL
    #   value: DEBUG
    # - name: WILDFLY_LOGLEVEL
    #   value: DEBUG
    # - name: CACHE_OWNERS
    #   value: "2"
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: {{ template "keycloak.name" . }}
              release: "{{ .Release.Name }}"
            matchExpressions:
              - key: role
                operator: NotIn
                values:
                  - test
          topologyKey: kubernetes.io/hostname
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: {{ template "keycloak.name" . }}
                release: "{{ .Release.Name }}"
              matchExpressions:
                - key: role
                  operator: NotIn
                  values:
                    - test
            topologyKey: failure-domain.beta.kubernetes.io/zone
  nodeSelector: {}
  tolerations: []
  ## Extra Annotations to be added to pod
  podAnnotations: {}
  livenessProbe:
    initialDelaySeconds: 120
    timeoutSeconds: 5
  readinessProbe:
    initialDelaySeconds: 30
    timeoutSeconds: 1
  resources: {}
  # limits:
  #   cpu: "100m"
  #   memory: "1024Mi"
  # requests:
  #   cpu: "100m"
  #   memory: "1024Mi"
  ## WildFly CLI configurations. They all end up in the file 'keycloak.cli' configured in the configmap which is
  ## executed on server startup.
  cli:
    ## Sets the node identifier to the node name (= pod name). Node identifiers have to be unique. They can have a
    ## maximum length of 23 characters. Thus, the chart's fullname template truncates its length accordingly.
    nodeIdentifier: |
      # Makes node identifier unique getting rid of a warning in the logs
      /subsystem=transactions:write-attribute(name=node-identifier, value=${jboss.node.name})
    logging: |
      # Allow log level to be configured via environment variable
      /subsystem=logging/console-handler=CONSOLE:write-attribute(name=level, value=${env.WILDFLY_LOGLEVEL:INFO})
      /subsystem=logging/root-logger=ROOT:write-attribute(name=level, value=${env.WILDFLY_LOGLEVEL:INFO})
      # Log only to console
      /subsystem=logging/root-logger=ROOT:write-attribute(name=handlers, value=[CONSOLE])
    reverseProxy: |
      /socket-binding-group=standard-sockets/socket-binding=proxy-https:add(port=443)
      /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=redirect-socket, value=proxy-https)
      /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=proxy-address-forwarding, value=true)
    ha: |
      /subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
      /subsystem=infinispan/cache-container=keycloak/distributed-cache=authenticationSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
      /subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
      /subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
      /subsystem=jgroups/channel=ee:write-attribute(name=stack, value=tcp)
    # Custom CLI script
    custom: ""
  podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 1
  service:
    annotations: {}
    # service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    labels: {}
    # key: value
    ## ServiceType
    ## ref: https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types
    type: ClusterIP
    ## Optional static port assignment for service type NodePort.
    # nodePort: 30000
    port: 80
  ## Ingress configuration.
  ## ref: https://kubernetes.io/docs/user-guide/ingress/
  ingress:
    enabled: true
    path: /keyauth
    annotations:
      kubernetes.io/ingress.class: traefik
      traefik.frontend.rule.type: PathPrefix
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
      # ingress.kubernetes.io/affinity: cookie
    ## List of hosts for the ingress
    hosts:
      - my-host.com
    ## TLS configuration
    tls: []
    # - hosts:
    #     - keycloak.example.com
    #   secretName: tls-keycloak
  ## Persistence configuration
  persistence:
    # If true, the Postgres chart is deployed
    deployPostgres: true
    # The database vendor. Can be either "postgres", "mysql", "mariadb", or "h2"
    dbVendor: postgres
    ## The following values only apply if "deployPostgres" is set to "false"
    # Specifies an existing secret to be used for the database password
    existingSecret: ""
    # The key in the existing secret that stores the password
    existingSecretKey: password
    dbName: keycloak
    dbHost: mykeycloak
    dbPort: 5432
    dbUser: keycloak
    # Only used if no existing secret is specified. In this case a new secret is created
    dbPassword: ""
postgresql:
  ### PostgreSQL User to create.
  ##
  postgresUser: keycloak
  ## PostgreSQL Password for the new user.
  ## If not set, a random 10 characters password will be used.
  ##
  postgresPassword: ""
  ## PostgreSQL Database to create.
  ##
  postgresDatabase: keycloak
  ## Persistent Volume Storage configuration.
  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes
  ##
  persistence:
    ## Enable PostgreSQL persistence using Persistent Volume Claims.
    ##
    enabled: true
test:
  image:
    repository: unguiculus/docker-python3-phantomjs-selenium
    tag: v1
    pullPolicy: IfNotPresent
```
Going back to the old `auth` basepath allows me to look at the redirect setting for the security-admin-console. It says `/auth/...*`, but even adding a `/keyauth/...*` entry and then upgrading with the steps above gives the same result (although the landing page itself is hosted on `/keyauth` in all cases).
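To illustrate why the valid-redirect-URI setting matters here, this is a simplified sketch of wildcard redirect-URI matching (not Keycloak's actual implementation; the paths are the ones from this report): a pattern registered under `/auth/...` can never match a console now served under `/keyauth/...`.

```python
def matches(pattern: str, requested: str) -> bool:
    """Simplified redirect-URI check: a trailing '*' acts as a prefix wildcard,
    otherwise the URI must match exactly."""
    if pattern.endswith("*"):
        return requested.startswith(pattern[:-1])
    return requested == pattern

# Admin console still registered under the old basepath -> rejected:
print(matches("/auth/admin/master/console/*", "/keyauth/admin/master/console/index.html"))    # False
# Pattern updated to the new basepath -> accepted:
print(matches("/keyauth/admin/master/console/*", "/keyauth/admin/master/console/index.html"))  # True
```

This only models the prefix comparison; it doesn't explain why the added `/keyauth/...*` entry in the report still fails, which is the open question.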