Affected Resource(s)
All the providers
Resource MRs required to reproduce the bug
No response
Steps to Reproduce
Deploy any resource that needs reconciliation from this project; the more resources you create, the faster the memory grows.
Wait and observe the memory increase of the provider pod (a polling sketch follows).
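If you want a repeatable way to watch the growth, here is a minimal Go sketch that polls the metrics-server API; it assumes metrics-server is installed in the cluster, and the `crossplane-system` namespace and `provider-pod-xxxx` pod name are placeholders for your own deployment, not values from this report.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	metricsv "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
	// Build a client config from the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	mc, err := metricsv.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, pod := "crossplane-system", "provider-pod-xxxx" // placeholders
	for {
		pm, err := mc.MetricsV1beta1().PodMetricses(ns).Get(context.TODO(), pod, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Print per-container memory usage; over hours this should show
		// the monotonic growth described below.
		for _, c := range pm.Containers {
			fmt.Printf("%s %s memory=%s\n",
				time.Now().Format(time.RFC3339), c.Name, c.Usage.Memory().String())
		}
		time.Sleep(30 * time.Second)
	}
}
```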
What happened?
The memory kept growing until the pod was restarted or OOM-killed; see the memory-usage graphs for the following modules:
Cloudplatform
Redis
Storage
Most of those drops are restarts; the curve is steeper for Cloudplatform because it is the most heavily used module in our setup. The behavior is consistent across all the modules.
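One way to confirm that the growth sits on the Go heap (rather than, say, in memory attributed to the container by other means) is to log runtime memory statistics from inside the provider process. The sketch below is purely illustrative diagnostic wiring under that assumption, not code from the actual provider:

```go
package main

import (
	"log"
	"runtime"
	"time"
)

// logHeapGrowth prints Go heap statistics at a fixed interval so that
// steady growth across reconcile loops becomes visible in the logs.
func logHeapGrowth(interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	var m runtime.MemStats
	for range ticker.C {
		runtime.ReadMemStats(&m)
		log.Printf("heap_alloc=%dMiB heap_objects=%d goroutines=%d",
			m.HeapAlloc/1024/1024, m.HeapObjects, runtime.NumGoroutine())
	}
}

func main() {
	go logHeapGrowth(30 * time.Second)
	select {} // stand-in for the provider's real run loop
}
```

A steadily rising heap_alloc together with a growing goroutine count would point at objects or goroutines leaked per reconcile, which matches the curves above.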
Relevant Error Output Snippet
No response
Crossplane Version
1.14.5
Provider Version
1.1.0
Kubernetes Version
No response
Kubernetes Distribution
No response
Additional Info
Since we identified the culprit, I will not go into too much detail. We will publish a PR with the fix we deployed in order to discuss what should be done.
@IxDay, thanks for your discovery and beautiful report 🙏 I wanted to note here that other providers are likely to be affected, in case the underlying Terraform providers function similarly (see the relevant lines linked in the thread).
This is probably a (well-observed) generalisation of the issue I noticed with the pubsub provider, noted here. I suspect that addressing it at this level would also resolve the issues I've experienced.