Component(s)
receiver/awscontainerinsight
What happened?
Description
We tried to increase the collection_interval parameter of the awscontainerinsightreceiver component to reduce AWS CloudWatch costs. I found that the failure is related to the TTL on the map used to store metric deltas: when the collection interval is longer than 5 minutes, delta collection breaks because the older entries are removed from the map before the new deltas are applied.
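To make the suspected interaction concrete, here is a simplified, hypothetical sketch (not the receiver's actual implementation) of a delta calculator whose state map is cleaned on a fixed TTL. When the scrape period exceeds that TTL, the previous sample is evicted before the next scrape arrives, so no delta can be produced for the cumulative CPU counters:

```go
package main

import (
	"fmt"
	"time"
)

// deltaCalculator is a simplified, hypothetical model: it keeps the previous
// cumulative value per metric key and periodically evicts entries that have
// not been updated within cleanInterval (the TTL).
type deltaCalculator struct {
	cleanInterval time.Duration
	prev          map[string]entry
}

type entry struct {
	value     float64
	timestamp time.Time
}

func newDeltaCalculator(cleanInterval time.Duration) *deltaCalculator {
	return &deltaCalculator{cleanInterval: cleanInterval, prev: map[string]entry{}}
}

// delta returns (current - previous) and whether a previous sample existed.
// If the previous sample was evicted by clean(), no delta can be computed.
func (d *deltaCalculator) delta(key string, value float64, now time.Time) (float64, bool) {
	p, ok := d.prev[key]
	d.prev[key] = entry{value: value, timestamp: now}
	if !ok {
		return 0, false
	}
	return value - p.value, true
}

// clean drops entries older than cleanInterval, mimicking a TTL-based cache.
func (d *deltaCalculator) clean(now time.Time) {
	for k, e := range d.prev {
		if now.Sub(e.timestamp) > d.cleanInterval {
			delete(d.prev, k)
		}
	}
}

func main() {
	const collectionInterval = 10 * time.Minute // e.g. collection_interval: 600s
	calc := newDeltaCalculator(5 * time.Minute) // TTL shorter than the scrape period

	now := time.Now()
	calc.delta("node_cpu_usage", 100, now) // first scrape seeds the previous value

	// A cleanup pass runs between scrapes; because 10m > the 5m TTL, the entry is gone.
	calc.clean(now.Add(collectionInterval))

	// Second scrape: the previous value was evicted, so no delta is produced
	// and the CPU usage metric is missing from the emitted data.
	if _, ok := calc.delta("node_cpu_usage", 150, now.Add(collectionInterval)); !ok {
		fmt.Println("no delta produced: previous sample expired before the next scrape")
	}
}
```

With a 600s collection interval and a 5-minute TTL this eviction happens on every scrape, which would explain the missing CPU usage metrics described below.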
Increasing cleanInterval to 15 minutes helps.

Steps to Reproduce

1. Create any EKS cluster
2. Install OTEL to collect AWS Container Insights
3. Set receivers.awscontainerinsightreceiver.collection_interval to 600s
4. Restart the daemonset
5. Wait for 15-20 minutes
Expected Result
Log events in CloudWatch contain CPU usage metrics
Actual Result
Log events in CloudWatch do not contain CPU usage metrics
Collector version
0.41.1
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
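The exact configuration used was not attached. A minimal configuration matching the reproduction steps might look like the following sketch; the awsemf exporter settings are illustrative placeholders, not taken from this report:

```yaml
receivers:
  awscontainerinsightreceiver:
    collection_interval: 600s

exporters:
  awsemf:
    namespace: ContainerInsights
    log_group_name: '/aws/containerinsights/{ClusterName}/performance'
    log_stream_name: '{NodeName}'

service:
  pipelines:
    metrics:
      receivers: [awscontainerinsightreceiver]
      exporters: [awsemf]
```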
Log output
No response
Additional context
Log event with collection_interval == 600s:
Log event with the default configuration: