Component(s)
receiver/splunkhec
What happened?
Description
We are still seeing an issue with collector memory after upgrading to 0.109.0 with the fix. The behavior has changed: we now see more memory attributed to the stack than to the heap, although the heap still grows slowly over time. As before, removing the HEC receiver from the logs pipeline eliminates the issue. This is a test cluster where I can reproduce the problem by sending metrics to the HEC receiver.
I'm wondering whether this is a continuation of the issue noted in #34886.
One clue is that all of this memory is attributed under StartLogsOp, while the memory under StartMetricsOp looks normal. Perhaps the way HEC payloads are processed, as events first, causes these log operations to never end. Forgive my speculation :)
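To make that speculation concrete, below is a minimal, self-contained Go sketch of the start/end pairing pattern the collector's receiver obsreport helpers follow (every StartLogsOp is expected to be matched by an EndLogsOp). The type, fields, and counter here are hypothetical illustrations, not the collector's actual implementation; the point is only that an unmatched start operation keeps its per-request state alive.

```go
package main

import (
	"context"
	"fmt"
	"sync/atomic"
)

// obsReport mimics the start/end pairing used by the collector's receiver
// helpers. The names and fields are hypothetical, for illustration only.
type obsReport struct {
	inFlight atomic.Int64 // operations started but not yet ended
}

type opKey struct{}

// StartLogsOp records the start of a receive operation and stores
// per-operation state on the returned context.
func (o *obsReport) StartLogsOp(ctx context.Context) context.Context {
	o.inFlight.Add(1)
	return context.WithValue(ctx, opKey{}, "state kept alive until EndLogsOp")
}

// EndLogsOp closes the operation started by StartLogsOp.
func (o *obsReport) EndLogsOp(ctx context.Context, numRecords int, err error) {
	o.inFlight.Add(-1)
}

func main() {
	rep := &obsReport{}
	ctx := context.Background()

	// A well-behaved handler pairs every start with an end.
	opCtx := rep.StartLogsOp(ctx)
	rep.EndLogsOp(opCtx, 10, nil)

	// If a code path starts a logs operation for a metrics payload but
	// never ends it, the associated state is never released.
	_ = rep.StartLogsOp(ctx)

	fmt.Println("operations still in flight:", rep.inFlight.Load())
}
```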
Steps to Reproduce
Send a large volume of metrics to the HEC receiver endpoint and profile the collector's memory (a minimal sender sketch is included below).
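For reference, here is a rough Go sketch of the kind of load generator used. The endpoint, token, host, and metric names are placeholders for this test cluster; the payload follows the standard Splunk HEC metrics JSON format (`"event": "metric"` with `metric_name:*` fields).

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// hecMetricEvent is a single metric event in Splunk HEC metrics JSON format.
type hecMetricEvent struct {
	Time   float64                `json:"time"`
	Event  string                 `json:"event"` // always "metric" for metric events
	Host   string                 `json:"host"`
	Fields map[string]interface{} `json:"fields"`
}

func main() {
	// Placeholder endpoint and token for the test cluster.
	endpoint := "http://localhost:8088/services/collector"
	token := "00000000-0000-0000-0000-000000000000"

	client := &http.Client{Timeout: 5 * time.Second}

	for i := 0; ; i++ {
		ev := hecMetricEvent{
			Time:  float64(time.Now().UnixNano()) / 1e9,
			Event: "metric",
			Host:  "load-generator",
			Fields: map[string]interface{}{
				"metric_name:test.cpu.usage": float64(i % 100),
				"region":                     "us-east-1",
			},
		}
		body, err := json.Marshal(ev)
		if err != nil {
			panic(err)
		}

		req, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(body))
		if err != nil {
			panic(err)
		}
		req.Header.Set("Authorization", "Splunk "+token)
		req.Header.Set("Content-Type", "application/json")

		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("send failed:", err)
			time.Sleep(time.Second)
			continue
		}
		resp.Body.Close()
	}
}
```

With the pprof extension enabled on the collector (default port 1777), the heap can then be inspected with `go tool pprof http://localhost:1777/debug/pprof/heap`.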
Expected Result
Memory should not be retained in this way.
Actual Result
Collector version
0.109.0
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler (if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
No response
Log output
No response
Additional context
We have also opened Splunk case 3554107.