It looks like there is no ClientScopeUpdatedEvent implemented that would replicate the cache invalidation to the other nodes. (See e.g. ClientUpdatedEvent, which implements this for clients.) Only ClientScopeAddedEvent and ClientScopeRemovedEvent are implemented. org.keycloak.models.cache.infinispan.RealmCacheSession#registerClientScopeInvalidation does not add any invalidation event, while registerClientInvalidation does add a ClientUpdatedEvent.
The issue was likely caused by this change in #29474: https://github.com/keycloak/keycloak/pull/29474/files#diff-023f59e6fb2ba157704ae7a51b204ddc82506dea04b2d353edeab93ac5a0185cL201
This line was removed from registerClientScopeInvalidation:
invalidationEvents.add(ClientTemplateEvent.create(id));
I.e., in Keycloak < 26 the ClientTemplateEvent was (ab)used as a ClientScopeUpdatedEvent.
We have exactly the same problem using a Docker setup with 2 nodes.
@ahus1 @mhajas Can you take a look at this? It sounds important. Thanks!
@pruivo - can you please check? Thanks!
Unless I'm missing something, the ClientTemplateEvent has been a no-op since 2016:
Could someone try the pull request?
I compiled the PR and tested it as follows:
# run two instances of Keycloak on different ports
bin/kc.sh --verbose start --http-enabled true --hostname-strict false --db postgres --db-password pass --db-username keycloak
bin/kc.sh --verbose start --http-enabled true --hostname-strict false --db postgres --db-password pass --db-username keycloak --http-port 8081 --http-management-port 9001
Then I updated the client scope on each side. This fails with 26.0.0, but works with the PR. The PR is therefore good to be merged. Anything you want to change before marking it ready?
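In case someone wants to script that cross-node check, here is a minimal sketch using the Keycloak admin client (org.keycloak:keycloak-admin-client). It updates a client scope through the first instance and reads it back through the second; the realm, admin credentials, and the scope name "profile" are placeholders for the two-instance setup above.

import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;
import org.keycloak.representations.idm.ClientScopeRepresentation;

// Minimal sketch: update a client scope via node A, then read it back via node B.
// Realm, credentials, and the scope name are placeholders for the setup above
// (two instances on ports 8080 and 8081 sharing one database).
public class ClientScopeReplicationCheck {

    static Keycloak admin(String baseUrl) {
        return KeycloakBuilder.builder()
                .serverUrl(baseUrl)
                .realm("master")
                .grantType("password")
                .clientId("admin-cli")
                .username("admin")
                .password("admin")
                .build();
    }

    public static void main(String[] args) {
        Keycloak nodeA = admin("http://localhost:8080");
        Keycloak nodeB = admin("http://localhost:8081");
        String realm = "master";

        // Pick an existing client scope by name on node A.
        ClientScopeRepresentation scope = nodeA.realm(realm).clientScopes().findAll().stream()
                .filter(s -> "profile".equals(s.getName()))
                .findFirst()
                .orElseThrow();

        // Update its description through node A.
        String marker = "updated-" + System.currentTimeMillis();
        scope.setDescription(marker);
        nodeA.realm(realm).clientScopes().get(scope.getId()).update(scope);

        // Read the same scope back through node B. On 26.0.0 this still shows the
        // stale description; with the PR both nodes should return the new value.
        ClientScopeRepresentation fromB =
                nodeB.realm(realm).clientScopes().get(scope.getId()).toRepresentation();
        System.out.println("node A wrote: " + marker);
        System.out.println("node B reads: " + fromB.getDescription());
    }
}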
Area
admin/api
Describe the bug
Hello,
Since Keycloak 26.0.0 (it still works in 25.0.6), when a client scope is updated in a distributed setup, the cache update message does not seem to be replicated to the other Keycloak pods. When you update the client scope and reload the Admin UI, or use the admin API directly, you get different results depending on which Keycloak instance you land on.
For client settings the replication works fine; I believe this is an issue isolated to client scopes.
I use the kubernetes cache stack, and the balancing of user sessions also works perfectly.
Version
26.0.0
Regression
Expected behavior
Cache entries of client scopes should be replicated to all members of the cluster
Actual behavior
The cache entry is not replicated; it is only fixed when you restart the other Keycloak pods or flush the cache.
How to Reproduce?
Create a production-grade Keycloak cluster and update one of the client scopes. Depending on which Keycloak instance you land on, you will see the old settings.
Anything else?
No response