Breee closed this issue 2 months ago
related to #115 and #108
Hi, we are running the latest v0.23 of the provider and keycloak v22.0.5. I have given the keycloak provider a 2Gi memory limit to get some headroom for diagnostics, and guess what: apparently the sky is the only limit:
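For anyone who wants to reproduce the setup: a memory limit like this can be set on the provider pod via Crossplane's (since-deprecated) `ControllerConfig` API. This is a minimal sketch, not our exact manifests; the metadata names are illustrative:

```yaml
# Illustrative ControllerConfig that caps the provider pod's memory.
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: provider-keycloak-config   # illustrative name
spec:
  resources:
    limits:
      memory: 2Gi
---
# Provider package referencing the ControllerConfig above.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-keycloak
spec:
  package: xpkg.upbound.io/crossplane-contrib/provider-keycloak:v0.23.0
  controllerConfigRef:
    name: provider-keycloak-config
```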
I'll take care of that ASAP using the method mentioned above; I hope that fixes it for us.
I didn't mean to rush you or anything. Just confirming that even on 0.23 the memory leak still persists. Thanks a bunch for any help.
Please test out
xpkg.upbound.io/crossplane-contrib/provider-keycloak:v0.24.0-rc.2
It contains the changes from PR #124.
Hi! It seems to me that memory consumption is holding at very reasonable levels under the limit 👍
However, it has a little drawback: I noticed that this improvement seems to have suddenly stopped working on this version (all builtin roles and clients are stuck in a "ReconcileError - external resource does not exist" state).
This might be a false alarm; give me some more time to test it out. It behaves differently on each of our clusters. Might have been just a glitch. 🙇♂️
Yes, you are correct -> the Observe step is in fact broken and I will investigate.
registry.gitlab.com/corewire/images/crossplane/function-keycloak-builtin-objects:v1.0.0
should solve your problem @vladimirblahoz
It's not compatible with v0.23.0 anymore though, because the way we import has changed. So this might be the time for a major version update in both cases.
So you can test with it and xpkg.upbound.io/crossplane-contrib/provider-keycloak:v1.0.0
If you import more resources, please look at https://github.com/crossplane-contrib/provider-keycloak/releases/tag/v1.0.0 because the way we import things has changed.
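For testing, installing the new version can look like this (a minimal sketch of a Crossplane `Provider` manifest; the metadata name is illustrative):

```yaml
# Illustrative Provider manifest pulling the v1.0.0 package.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-keycloak   # illustrative name
spec:
  package: xpkg.upbound.io/crossplane-contrib/provider-keycloak:v1.0.0
```

If you already have the provider installed, updating `spec.package` in place triggers the upgrade via Crossplane's package manager.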
Looks like v1.0.0 of the provider along with v1.0.0 of the builtin-objects function get along quite well. The built-in roles get loaded very quickly and can be assigned 👍
Memory consumption seems to rise at a much slower rate than before.
Don't get me wrong - we still experience regular restarts of the provider and the memory graph still looks like this:
However, the provider can now operate with 512Mi of memory, restarts roughly every 5 hours, and between restarts it is capable of doing its work (unlike before, when even with 2Gi it was restarting constantly and not syncing anything).
Truth be told, we operate tens of resources, and no small number of them are stuck in an unsynced or not-ready state (for a multitude of reasons), so the provider is permanently working its ass off. I still wouldn't expect that to be a reason to get OOMKilled, but I also expect that once we get into a more stable state of things this won't be an issue anymore.
So thanks a bunch for unblocking us; if I happen to detect what exactly is causing those ever-rising stairs of memory consumption, I'll let you know.
I think we can close this for the moment then.
It seems there are memory leaks and bad CPU performance.
Maybe this helps us to fix that: https://github.com/grafana/crossplane-provider-grafana/issues/107 https://github.com/grafana/crossplane-provider-grafana/pull/113/files