mterhar opened 3 weeks ago
I also rigged this up with Docker Compose to get clearer output. This is the Resource section in the console output:
otel-col | "Resource": [
otel-col | {
otel-col | "Key": "service.instance.id",
otel-col | "Value": {
otel-col | "Type": "STRING",
otel-col | "Value": "c0679388-2ab5-4339-a07d-47e91fd33e36"
otel-col | }
otel-col | },
otel-col | {
otel-col | "Key": "service.name",
otel-col | "Value": {
otel-col | "Type": "STRING",
otel-col | "Value": "otelcol-k8s"
otel-col | }
otel-col | },
otel-col | {
otel-col | "Key": "service.version",
otel-col | "Value": {
otel-col | "Type": "STRING",
otel-col | "Value": "0.107.0"
otel-col | }
otel-col | }
otel-col | ],
otel-col | "ScopeMetrics": [ ... ]
I also tried overriding the service name, and it didn't have any effect either.
services:
otelcol:
image: otel/opentelemetry-collector-k8s:0.107.0
container_name: otel-col
environment:
- OTEL_RESOURCE_ATTRIBUTES=pod_ip=docker-compose
- OTEL_SERVICE_NAME=override_env
command:
[
"--config=/etc/otelcol-config.yml",
"--feature-gates=telemetry.useOtelWithSDKConfigurationForInternalTelemetry",
]
volumes:
- ./otlptelemetry-collector-config.yaml:/etc/otelcol-config.yml
ports:
- "4317" # OTLP over gRPC receiver
I agree we should support this.
@mterhar as a workaround, what happens if you do:
service:
telemetry:
resource:
pod_ip: "my favorite ip"
That does work as a workaround, but the Helm chart doesn't expose a pod name within the container, which makes alerts a bit tough to map back to reality.
For now you can use env vars and the Downward API. I believe something like this will work:
mode: deployment
image:
repository: otel/opentelemetry-collector-k8s
extraEnvs:
- name: K8s_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
config:
service:
telemetry:
resource:
k8s.pod.name: ${env:K8s_POD_NAME}
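The same Downward API pattern should extend to the pod IP the original config was after, and to the namespace. A sketch assuming the same chart values layout (status.podIP and metadata.namespace are standard Kubernetes Downward API field paths; the resource keys are illustrative):

extraEnvs:
  - name: K8S_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: K8S_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
config:
  service:
    telemetry:
      resource:
        k8s.pod.ip: ${env:K8S_POD_IP}
        k8s.namespace.name: ${env:K8S_NAMESPACE}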
I can give it a try if @codeboten doesn't mind :)
@iblancasa sure go for it!
Describe the bug
When enabling the telemetry.useOtelWithSDKConfigurationForInternalTelemetry feature gate and configuring an exporter to send telemetry using OTLP rather than Prometheus, it doesn't seem to include the resource attributes set by the OTEL_RESOURCE_ATTRIBUTES environment variable.
Steps to reproduce
Using these in the helm chart:
It renders to a container spec that looks correct:
Emitted metrics do not seem to have the resource attribute added.
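For reference, internal telemetry over OTLP under this feature gate is configured roughly like the sketch below (this is not the reporter's actual config, which is omitted above; the endpoint is a placeholder):

service:
  telemetry:
    metrics:
      readers:
        - periodic:
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: http://otlp-backend:4318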
What did you expect to see?
The metrics that show up at the OTLP target have a resource attribute called pod_ip.
What did you see instead?
The metrics that show up at the OTLP target have no such resource attribute.
What version did you use?
contrib 0.107.0
What config did you use?
Environment
Kubernetes is Amazon EKS: v1.30.2-eks-1552ad0
Additional context