Closed · vijaykumarv7 closed this issue 1 year ago
Hi,
I was able to reproduce the issue by deploying it with multiple replicas. I will open a task for investigation. Thank you so much for reporting.
Hi,
Is there any update on my issue?
Hi,
I'm afraid it is still in our backlog. As soon as there is news, we will update the ticket.
Hi,
I would like to know if there is any update on my issue.
@vijaykumarv7 You are using chart version 9.6.6; I think `cache.authOwnersCount` and `cache.ownersCount` are not supported anymore (they were in the 7.x chart versions). In the 9.x chart versions, you need to explicitly set `cache.enabled` to `true`, as the default is `false`:

```yaml
cache:
  enabled: true
```
Running this on a clean install works for me; the logs tell me that the Infinispan cluster is formed. I also did the following validation: run two replicas, log in to Keycloak, kill one replica, and I'm still logged in. Kill the other replica (once the previous one is up and running again) and I'm still logged in.
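For completeness, the fix can be combined with the replica settings in a single values file. This is a sketch, not a tested configuration; the replica count of 2 is an assumption chosen just to exercise HA:

```yaml
# values.yaml — sketch for bitnami/keycloak chart 9.x
replicaCount: 2      # more than one replica so the Infinispan cluster matters
cache:
  enabled: true      # required explicitly in 9.x; the chart default is false
```

Then install with `helm install keycloak bitnami/keycloak -f values.yaml`.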
@jordi-t Thanks a ton! It's working as expected.
@jordi-t and @vijaykumarv7 Folks, can you please share your values file with us that you use? Thanks.
@mkuendig

```shell
helm install --set replicaCount=2 --set service.type=ClusterIP --set cache.enabled=true keycloak bitnami/keycloak
```
Hi there!
We have just released a new major version of the bitnami/keycloak chart (11.0.0), which sets `cache.enabled=true` by default.
Name and Version
bitnami/keycloak 9.6.6
What steps will reproduce the bug?
I have installed the chart on an Azure Kubernetes cluster. I've been deploying Keycloak in an HA scenario (3 pods) by setting `replicaCount` to > 1. I have integrated Keycloak with our frontend portal, where users hit a particular realm and do their day-to-day work. When the replica count is set to 1, the issue does not appear. When I scale the StatefulSet to more than 1, users are unable to log in to their realm. The issue is not specific to the realm; it only occurs in the HA scenario.
The pods start properly without any error messages. However, I'm a bit suspicious about the Infinispan cluster creation, as every node reports that no members were discovered and that it created the cluster as coordinator. The pods do not discover each other. The previous WildFly-based chart had a `serviceDiscovery` flag that we used to enable; that flag has been removed in the present version.
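One way to confirm whether the pods actually formed a cluster is to look for Infinispan's cluster-view log line (code `ISPN000094`) in each pod's logs. The snippet below is a sketch: in a live cluster you would grep `kubectl logs`, and the sample log line and pod names here are assumptions used only to show what a healthy two-member view looks like:

```shell
#!/usr/bin/env bash
# Live cluster check (assumes default StatefulSet pod naming):
#   kubectl logs keycloak-0 | grep ISPN000094
# Sample of what a healthy view line looks like with two members:
log="ISPN000094: Received new cluster view for channel ISPN: [keycloak-0|1] (2) [keycloak-0, keycloak-1]"
# Extract the member count from the parenthesized number in the view
members=$(echo "$log" | sed -n 's/.*(\([0-9]*\)).*/\1/p')
echo "cluster members: $members"
```

If each pod only ever reports a view of size 1 containing itself, the pods are not discovering each other, which matches the symptom above.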
The Helm chart from Codecentric works fine in the HA scenario.
Here are the relevant logs:
pod keycloak-2:
This Keycloak discussion thread says the issue lies with the Bitnami Helm chart / Docker image: https://keycloak.discourse.group/t/ha-setup-in-kubernetes/15874/11
Please help us mitigate this issue.
Are you using any custom parameters or values?
What is the expected behavior?
Users should be able to log in to Keycloak.
What do you see instead?
When I inspect Keycloak, I get this error whenever I run the pod with more than 1 replica.