Closed jgrobbel closed 4 years ago
Hi @jgrobbel, sometimes the agent RSS memory can take over 24h to stabilize; this is due to GC behavior.
Can you by any chance increase the memory limits on the agent pod and confirm whether the RSS continues to increase past 24 hours? I'm not saying we're not leaking, but we're running 7.21.1
internally and haven't found any leakage problems; that said, the problem could also be in one of the integrations (perhaps something we don't use ourselves).
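For anyone following along, raising the limit for this test is a small change to the DaemonSet spec. This is only a sketch under assumptions: the DaemonSet and container names (`datadog-agent`, `agent`) and the memory values are illustrative, not taken from this issue — adjust them to match your manifest.

```yaml
# Illustrative fragment of a DaemonSet spec: temporarily raise the agent
# container's memory limit so RSS can be observed past the ~24h GC
# stabilization window without the pod being killed.
spec:
  template:
    spec:
      containers:
        - name: agent            # container name may differ in your manifest
          resources:
            requests:
              memory: "256Mi"    # example request, not a recommendation
            limits:
              memory: "1Gi"      # generous ceiling for the observation period
```

After applying the change (e.g. by editing the DaemonSet and letting the pods roll), watching memory over 24–48h should show whether RSS plateaus or keeps climbing.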
@truthbk Sorry for the delay. I suspect you are right; in other places we are running agents without the leak, so it seems related to one of the integrations, as you suggested. I will close this for now while I work on isolating where the leak is. Thanks.
Output of the info page (if this is a bug)
Describe what happened:
The agents are slowly using up memory until they get killed by Kubernetes for exceeding their resource limits:
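One quick way to confirm that the kills described above are memory-limit kills is to inspect the pod's last container state. A minimal sketch, assuming a `datadog` namespace and using `<agent-pod>` as a placeholder for the actual pod name:

```shell
# Find the agent pod, then check why its previous container instance died;
# a memory-limit kill shows "Reason: OOMKilled" under "Last State".
kubectl -n datadog get pods
kubectl -n datadog describe pod <agent-pod> | grep -A 3 "Last State"

# Current memory usage, to watch RSS growth over time (requires metrics-server).
kubectl -n datadog top pod <agent-pod>
```

Seeing `OOMKilled` there (rather than a liveness-probe restart) confirms the pod is being killed for exceeding its memory limit.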
Describe what you expected:
Memory usage should return to normal after bursty events.
Steps to reproduce the issue:
Not 100% sure other than just running it.
Additional environment details (Operating System, Cloud provider, etc):
Running as a Kubernetes DaemonSet on GCP. We are also using the JMX-enabled image.