SoerenHenning opened 1 week ago
I also get the same error when setting up with mode "deployment", so the issue seems to be more general. This is blocking my use of the dynatrace-otel-collector for now.
Confirming that 0.14.0 starts without errors.
The issue is caused by an inconsistent state of the feature gate used here. Since the dynatrace-otel-collector uses components from two repositories (core and contrib), each of them has a different state of the feature gate, which I consider a bug; it is already reported here.
The fix for now is to not use the feature gate `component.UseLocalHostAsDefaultHost` at all and to adapt your collector configuration to use `0.0.0.0` instead of `localhost`.
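For example, assuming an OTLP receiver on the default ports (the relevant receivers and ports depend on your setup), the workaround amounts to binding explicitly instead of relying on the default host:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        # Bind to all interfaces explicitly instead of relying on the
        # default host, which the feature gate changes to localhost.
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
```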
We will try to get the inconsistency fixed upstream ASAP.
The PR just got merged, so after the next upstream release we can proceed with a dynatrace-otel-collector release. Afterwards the feature gate will be stable in all components, and it will therefore no longer be possible to disable or toggle it.
Describe the bug
With the latest release (v0.15.0), I get the following error in all OTel collector pods when deploying with the OpenTelemetry Operator:
My deployment mostly follows the Dynatrace OTel collector deployment docs; the main difference is that I use `mode: "daemonset"` (see below). With v0.14.0 everything works fine. I assume this is related to https://github.com/open-telemetry/opentelemetry-operator/issues/3306.
To Reproduce
I deploy the Dynatrace OTel Collector with the OpenTelemetry Operator according to the steps described in the Dynatrace OTel collector deployment docs, using "Deploy as an agent (DaemonSet)". My `OpenTelemetryCollector` manifest looks like this:

Expected behavior
I expected that on each Kubernetes node a pod
dynatrace-otel-collector-...
is successfully started.

Additional context
I am using Kubernetes version 1.29 in AWS EKS.
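For reference, a minimal sketch of an `OpenTelemetryCollector` manifest along these lines (hypothetical: the name, apiVersion, and exporter endpoint are placeholders, and the actual manifest from the deployment docs will differ):

```yaml
apiVersion: opentelemetry.io/v1beta1   # older operator versions use v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: dynatrace-otel-collector       # placeholder name
spec:
  mode: daemonset                      # instead of the documented mode: "deployment"
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}   # default endpoints here are what the feature gate affects
          http: {}
    exporters:
      otlphttp:
        endpoint: https://<tenant>.live.dynatrace.com/api/v2/otlp  # placeholder
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlphttp]
```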