Closed: momilo closed this issue 2 months ago.
This provider repo does not have enough maintainers to address every issue. Since there has been no activity in the last 90 days it is now marked as stale. It will be closed in 14 days if no further activity occurs. Leaving a comment starting with /fresh will mark this issue as not stale.
This issue is being closed since there has been no activity for 14 days since it was marked as stale. If you still need help, feel free to comment or reopen the issue!
/fresh ?
It's still a problem :-(
@mergenci - sorry for the confusion. Shortly after my follow-up comment, this fix went in and was released.
We have deployed the fix and it seems to have, indeed, addressed this issue as well. We will continue monitoring, but, to the best of my current knowledge, it is no longer a problem.
Is there an existing issue for this?
Affected Resource(s)
xpkg.upbound.io/upbound/provider-gcp-pubsub
Resource MRs required to reproduce the bug
k describe provider provider-gcp-pubsub
k get pod provider-gcp-pubsub-4f8a71eab319-85688d99c-t5pwq -o=yaml -n=crossplane-system
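For completeness, a minimal sketch of generating the Topic MRs involved in the reproduction. The manifest schema (`pubsub.gcp.upbound.io/v1beta1`, kind `Topic`), the `repro-topic-*` names, and the `default` ProviderConfig are assumptions; adjust them to your provider version before applying with `k apply -f "$OUT_DIR"`:

```shell
# Hypothetical repro helper: write c. 100 Topic MR manifests to OUT_DIR.
# The apiVersion/kind and the ProviderConfig name are assumptions -- check
# them against your installed provider-gcp-pubsub version.
OUT_DIR="${OUT_DIR:-./repro-mrs}"
mkdir -p "$OUT_DIR"
i=1
while [ "$i" -le 100 ]; do
  cat > "$OUT_DIR/topic-$i.yaml" <<EOF
apiVersion: pubsub.gcp.upbound.io/v1beta1
kind: Topic
metadata:
  name: repro-topic-$i
spec:
  forProvider: {}
  providerConfigRef:
    name: default
EOF
  i=$((i + 1))
done
echo "wrote $(ls "$OUT_DIR" | wc -l) manifests to $OUT_DIR"
```

The Subscription and IAM-member MRs from the steps below would follow the same pattern with their respective kinds.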
Steps to Reproduce
Create c. 100 topics (topics.pubsub.gcp.upbound.io), and c. 100 subscriptions (subscription.pubsub.gcp.upbound.io) + related IAM (c. 100 topiciammembers.pubsub.gcp.upbound.io + c. 100 subscriptioniammembers.pubsub.gcp.upbound.io).

What happened?
The pubsub provider's pod memory usage grows to c. 20GB over the course of a day, at which point the pod gets OOM-killed. This is true for both provider v1.0.1 + crossplane 1.15.1, and for provider v1.0.0 + crossplane 1.15.

Ssh-ing into the pod and running top confirms that all memory is used by the provider application. (Note: the attached screenshots of the top output, omitted here, were not taken at the same time, hence the different memory usage reported.)
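As an alternative to comparing top screenshots taken at different times, one could timestamp each memory sample so the day-long growth is easy to chart. A small hedged sketch (the pod line shown is an example; in-cluster, the sample would come from `kubectl top pod -n crossplane-system --no-headers`, which requires metrics-server):

```shell
# Hypothetical helper: prefix a "NAME CPU MEMORY" line, as printed by
# `kubectl top pod --no-headers`, with a UTC timestamp for later graphing.
sample_mem() {
  printf '%s %s\n' "$(date -u +%FT%TZ)" "$1"
}

# Example with a canned sample; real usage would be:
#   sample_mem "$(kubectl top pod -n crossplane-system <provider-pod> --no-headers)"
sample_mem "provider-gcp-pubsub-4f8a71eab319-85688d99c-t5pwq 250m 18934Mi"
```

Appending these lines to a file every few minutes would make the leak's slope visible without screenshots.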
Note that the other workloads in crossplane-system (crossplane, crossplane-rbac-manager, and provider-gcp-cloudplatform) are behaving fine under this configuration.

I have not had a chance to recompile the provider with pprof enabled to investigate further.
Relevant Error Output Snippet
Note that the system is otherwise working fine: all topics and subscriptions are shown as ready and in sync, and the crossplane stack produces no logs indicating any issues.
Crossplane Version
1.15.1
Provider Version
1.0.1
Kubernetes Version
v1.28.5-gke.1217000
Kubernetes Distribution
GKE
Additional Info
Alas, my initial hope that enabling debug logging (which Issue 471 already kindly addressed) would also resolve the memory leak did not come true :-(.