Closed: kenfinnigan closed this issue 4 months ago.
I suspect the setup/teardown of the watches isn't quite right.
That's what I was thinking as well, but I'm not sure how to debug it further to narrow down the issue.
Any suggestions @TylerHelmuth? (Bearing in mind I'm reasonably new to Golang)
Hopefully the problem is somewhere in https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/k8sobjectsreceiver/receiver.go
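For anyone picking this up, here is a minimal sketch of the watch lifecycle the receiver has to manage, assuming the standard client-go dynamic client (illustrative only, not the receiver's actual code; `startWatch`, its parameters, and the processing loop are hypothetical, but `Watch`, `Stop`, and `ResultChan` are the real client-go API). If the cancel/`Stop` path is ever skipped when the Collector config is rebuilt, the old goroutine keeps the watch, and its API traffic, alive:

```go
package watchexample

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// startWatch is a hypothetical helper showing the lifecycle a receiver
// must manage: every watch opened on Start must be cancelled on
// Shutdown, otherwise the goroutine keeps reading (and client-go keeps
// re-establishing) the watch after the namespace is dropped.
func startWatch(ctx context.Context, client dynamic.Interface,
	gvr schema.GroupVersionResource, namespace string) (stop func(), err error) {
	ctx, cancel := context.WithCancel(ctx)
	w, err := client.Resource(gvr).Namespace(namespace).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		cancel()
		return nil, err
	}
	go func() {
		defer w.Stop() // release the API connection when the loop exits
		for {
			select {
			case <-ctx.Done(): // shutdown path: stop() was called
				return
			case _, ok := <-w.ResultChan():
				if !ok {
					return // server closed the watch
				}
				// ...convert the event and forward it downstream...
			}
		}
	}()
	return cancel, nil
}
```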
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Component(s)
receiver/k8sobjects
What happened?
Description
Lumigo has an operator which uses the OTel Collector builder to construct an in-cluster Collector instance for collection and processing before transmitting traces/logs/etc. to a backend.
The operator tracks which namespaces need to be monitored, and rebuilds the OTel Collector instance when namespaces are added or removed. For each monitored namespace, it defines a `k8sobjectsreceiver` for k8s objects and events covering a specific set of object kinds (Pod, Deployment, etc.). When a namespace is removed from monitoring, and the Collector is restarted with that namespace's receiver config removed, the Collector logs contain large numbers of errors about API requests being throttled.
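For reference, a per-namespace receiver definition of the kind described above might look roughly like this (an illustrative sketch based on the k8sobjectsreceiver README's config shape, not Lumigo's actual generated config; the namespace name is made up):

```yaml
receivers:
  k8sobjects/my-namespace:
    objects:
      - name: pods
        mode: watch
        namespaces: [my-namespace]
      - name: events
        mode: watch
        namespaces: [my-namespace]
```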
Steps to Reproduce
1. Observe the `telemetry-proxy` container (the internal Collector instance) logs
2. Run `kubectl delete -n {ns} lumigo lumigo`
3. Confirm from the logs that the `telemetry-proxy` container was restarted

Expected Result
Collector is no longer retrieving k8s events and objects for the namespace.
Actual Result
Errors in the logs about Kubernetes API calls to the no-longer-monitored namespace being throttled.
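A plausible mechanism for the throttling, assuming watches from the removed namespace are leaking: client-go applies a client-side token-bucket rate limiter per `rest.Config`, with documented defaults of 5 QPS / 10 burst when the fields are left unset, so even a few leaked list/watch loops can exhaust the budget. A tiny sketch of where those limits live (the values shown are just client-go's documented defaults):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	// client-go wraps every client built from a rest.Config in a
	// token-bucket rate limiter; leaked watch/re-list loops from
	// receivers that were never shut down still draw from this budget.
	cfg := rest.Config{
		QPS:   5,  // documented client-go default when left at zero
		Burst: 10, // documented client-go default when left at zero
	}
	fmt.Printf("client-side limits: QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
}
```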
Collector version
v0.89.0
Environment information
Environment
OS: Alpine
OpenTelemetry Collector configuration
Log output
Additional context
No response