Open ivalba opened 5 months ago
Is using an OTel Collector on the same pod an acceptable solution? You may also be interested in https://github.com/open-telemetry/opentelemetry-operator.
@pellared in my scenario that is not a suitable solution: I'm leveraging an automated mechanism in my platform where the collector runs on a separate pod, and we would like to keep it as automated as possible.
I was wondering if there is some small mistake in the script definition inside OTEL_RESOURCE_ATTRIBUTES that prevents the container.id from being resolved and gives me the script text itself instead.
we would like to keep it as automated as possible
Why would using https://github.com/open-telemetry/opentelemetry-operator be less (or not at all) automated?
Would it be possible to add some of the existing resource detectors into the auto instrumentation, e.g. some of this code:
being added here:
Describe the bug
I'm using auto instrumentation in a minikube cluster running 1 pod spec (3 replicas); each pod has 2 containers: 1) a simple Go HTTP server, 2) the auto instrumentation container.
Traces are working fine except for container.id, which I need for infra correlation in the platform I'm using (Cisco Cloud Observability).
I'm trying to use OTEL_RESOURCE_ATTRIBUTES to get the container.id through a script reading /proc/self/mountinfo, but I can't get the id. Running the command inside the container shell works fine.
this is my deployment YAML:
The result is that I can see all the traces correctly, but instead of the container.id I get the script code itself:
This is my Dockerfile:
And this is my main.go:
Environment
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Seeing the container.id resource attribute coming from the auto-instrumented Go application container.