Probably related to: https://github.com/crossplane-contrib/provider-keycloak/issues/65
What kind of resources do you use?
Chart with bandwidth usage:
There is now also a way to get Prometheus metrics about resources: https://blog.crossplane.io/crossplane-v1-16/. I have to check out how to implement that; reference: https://github.com/crossplane-contrib/provider-upjet-aws/pull/1281/files
I'll track that in #109 and probably work on it tomorrow.
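In the meantime, if you want Prometheus to pick up whatever metrics the provider pod already serves, here is a minimal sketch using a DeploymentRuntimeConfig. Assumptions: Crossplane 1.14+, a Prometheus setup that honors `prometheus.io/*` pod annotations, and the common controller-runtime metrics port 8080 - verify all of these against your own setup:

```yaml
# Sketch only: annotate the provider pod so an annotation-based Prometheus
# discovers it. Port 8080 is an assumption (controller-runtime default),
# not something this provider documents.
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: provider-keycloak-metrics
spec:
  deploymentTemplate:
    spec:
      selector: {}
      template:
        metadata:
          annotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "8080"
        spec:
          containers:
            - name: package-runtime  # Crossplane's default runtime container name
```

It takes effect once the Provider object references it via `spec.runtimeConfigRef`.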
- ProtocolMapper
- Client
- Role
Chart with bandwidth usage:
So maybe it's really the protocol mappers: https://github.com/crossplane-contrib/provider-keycloak/issues/65#issuecomment-2047479756 - I have to check that out as well -> you can try that workaround.
Ok - in our case it was just the missing `introspection.token.claim: "true"` value, which caused a Terraform plan/apply loop against a default value set by Keycloak (so to get it right, all the expected values need to be provided explicitly). We're verifying the solution right now.
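For anyone hitting the same loop, a hedged sketch of what a fully specified mapper might look like as a managed resource. The apiVersion group, version, and field names here are assumptions derived from upjet naming conventions and the underlying `keycloak_generic_protocol_mapper` Terraform resource, so check `kubectl explain` against your installed CRDs:

```yaml
# Sketch only - apiVersion and field names are assumptions, not verified
# against the provider's CRDs.
apiVersion: client.keycloak.crossplane.io/v1alpha1
kind: ProtocolMapper
metadata:
  name: example-audience-mapper
spec:
  forProvider:
    realmId: my-realm
    clientId: my-client
    protocol: openid-connect
    protocolMapper: oidc-audience-mapper
    config:
      included.client.audience: my-client
      id.token.claim: "true"
      access.token.claim: "true"
      # Newer Keycloak versions default this server-side; omitting it meant
      # the provider saw a permanent diff and re-applied on every reconcile.
      introspection.token.claim: "true"
```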
@Breee the change helps - in our case it decreased the CPU usage from ~7 cores to a stable 2-3 cores, but it seems there is some memory leak (maybe in Terraform - I see v1.4.6 is used) which causes the provider to consume more and more memory over time:
Can you test if the leaks go away with `xpkg.upbound.io/crossplane-contrib/provider-keycloak:v0.23.0`?
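Bumping is just a change to the Provider object's `spec.package` - a minimal sketch, assuming the provider was installed as a `Provider` named `provider-keycloak`:

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-keycloak
spec:
  # Point at the release to test; Crossplane rolls out a new provider revision.
  package: xpkg.upbound.io/crossplane-contrib/provider-keycloak:v0.23.0
```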
I will have to re-test it multiple times with different memory limits, but in our case, after the first trials, it seems it is still leaking even in v0.23.
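For anyone repeating these tests, the limits can be varied per run with a DeploymentRuntimeConfig - a sketch, assuming Crossplane 1.14+ (`package-runtime` is the documented default container name):

```yaml
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: provider-keycloak-limits
spec:
  deploymentTemplate:
    spec:
      selector: {}
      template:
        spec:
          containers:
            - name: package-runtime
              resources:
                requests:
                  memory: 512Mi
                limits:
                  memory: 2Gi  # raise between runs to see where the leak plateaus
```

As above, it only applies once the Provider references it via `spec.runtimeConfigRef`.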
@Breee applied the change. Will let you know about the results.
Tracking that in #118 -> will try to fix that asap
Please test out, @nowyp @vladimirblahoz:
`xpkg.upbound.io/crossplane-contrib/provider-keycloak:v0.24.0-rc.2`
It contains the changes of PR https://github.com/crossplane-contrib/provider-keycloak/pull/124.
@Breee thanks for the update. Applied the changes to our dev cluster - I'll let you know about the outcome later.
First tests look promising:
Our long-term results don't look so promising.
The Keycloak provider pod is getting OOMKilled about every hour and a half, overflowing its 512Mi memory limit very regularly (running on 0.24.0-rc.2).
Not sure if this has something to do with the broken built-in client/role resources constantly and unsuccessfully trying to sync...
> Our long-term results don't look so promising.
> The Keycloak provider pod is getting OOMKilled about every hour and a half, overflowing its 512Mi memory limit very regularly (running on 0.24.0-rc.2).
> Not sure if this has something to do with the broken built-in client/role resources constantly and unsuccessfully trying to sync...
Probably. I have to verify that myself.
On our side it is still stable:
let's consolidate further discussions into #118
Hi,
we noticed a very large CPU increase after switching from Keycloak 21.1.2 to 24.0.4. Initially we assumed the problem was related to us using an older provider version (v0.17.0), but we upgraded to the latest v0.22.0 and it did not help.
Below is a Grafana CPU usage chart showing how it changed when we upgraded Keycloak:
Apart from the Keycloak upgrade, there were no other changes in this specific release. We also do not observe any errors or anything like that in the Keycloak provider logs.