Closed: dineshg13 closed this 8 months ago
This is resolved via feature gate. See datadog connector readme.
Hi,
We're still seeing memory issues, even with the feature gate enabled.
Hi @grzn,
Can you please give an example of how to enable the feature gate? I looked for one but didn't find anything.
Same for us. We've reported our issue directly to DataDog.
It's a command-line parameter to the binary.
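For context, the collector accepts feature gates through the `--feature-gates` command-line flag. A minimal sketch follows; the gate name shown is illustrative only, so check the Datadog connector README for the exact gate name in your collector version:

```shell
# Enable a feature gate by passing its name to the collector binary.
# "connector.datadogconnector.performance" is a placeholder here;
# the real gate name is documented in the Datadog connector README.
otelcol-contrib --config=config.yaml \
  --feature-gates=connector.datadogconnector.performance

# To explicitly disable a gate, prefix its name with a minus sign:
otelcol-contrib --config=config.yaml \
  --feature-gates=-connector.datadogconnector.performance
```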
We were on v0.7-something and everything was fine. Now we're trying v0.92 and it's leaking. Going to try v0.82, which is the last version before the processor refactor.
@grzn We didn't have success with the deprecated processor because it does not support computing stats by peer service and span kind.
Once we enabled it, we lost the ability to see metrics for inferred services.
I'll push this through our DataDog channels as well.
Cross-referencing to https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/30828
Component(s)
connector/datadog
What happened?
Description
Customers using the Datadog connector at scale have reported Collector memory issues. We were able to replicate the issue with the help of a trace dump. A collector using the Datadog connector steadily increases memory usage and OOMs within a few minutes of starting.
Steps to Reproduce
Use the collector configuration and send traces through the pipeline.
Expected Result
Collector shouldn't OOM.
Actual Result
Collector memory and CPU spike, and we are unable to use the Datadog connector at scale.
Collector version
v0.91.0
Environment information
Environment
Latest GKE cluster.
OpenTelemetry Collector configuration
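The reporter's configuration was not captured in this copy of the issue. As a hedged sketch only, a typical pipeline using the Datadog connector (per the connector's documented usage pattern) looks roughly like this; the receiver, processor, and API-key placeholder are assumptions, not the reporter's actual settings:

```yaml
# Minimal illustrative sketch -- NOT the reporter's actual configuration.
receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:

connectors:
  # The Datadog connector computes APM stats from traces
  # and feeds them into a metrics pipeline.
  datadog/connector:

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}  # placeholder

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog/connector, datadog]
    metrics:
      receivers: [datadog/connector]
      processors: [batch]
      exporters: [datadog]
```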
Log output
No response
Additional context
No response