open-telemetry / opentelemetry-collector-contrib

Contrib repository for the OpenTelemetry Collector
https://opentelemetry.io
Apache License 2.0

Killed pod metrics still present on the Prometheus exporter #34105

Open necipakca opened 1 month ago

necipakca commented 1 month ago

Component(s)

exporter/prometheus

What happened?

Description

Even though I have set metric_expiration to 1m, the Prometheus exporter still presents the old metrics, even for pods that were killed a couple of hours ago.

Collector version

otel/opentelemetry-collector-contrib:0.102.0

Environment information

Environment

K8s

OpenTelemetry Collector configuration

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: collector-deployment
  namespace: my-ns
spec:
  mode: deployment
  podAnnotations:
    sidecar.istio.io/inject: "false"
    prometheus.io/port: "8889"
  replicas: 2
  resources:
    requests:
      memory: "128Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "1"
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      transform/drop:
        trace_statements:
          - context: span
            statements:
              - delete_key(resource.attributes, "process.command_args")
      memory_limiter:
        check_interval: 1s
        limit_percentage: 80
        spike_limit_percentage: 20
      batch: {}
      filter/drop_actuator:
        error_mode: ignore
        traces:
          span:
          - attributes["net.host.port"] == 9001
    connectors:
      spanmetrics:
        events:
          enabled: true
          dimensions:
            - name: exception.type
            - name: exception.message
    exporters:
      debug:
        verbosity: detailed
      otlp/jaeger:
        endpoint: "jaeger-collector.jaeger.svc.cluster.local:4317"
        tls:
          insecure: true
      prometheus:
        endpoint: "0.0.0.0:8889"
        metric_expiration: 80s
        enable_open_metrics: true
        add_metric_suffixes: true
        send_timestamps: true
        resource_to_telemetry_conversion:
          enabled: true
    extensions:
      health_check: {}
    service:
      telemetry:
        logs:
          level: "info"
      extensions: [health_check]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, transform/drop, filter/drop_actuator, batch]
          exporters: [spanmetrics, otlp/jaeger]
        metrics:
          receivers: [spanmetrics]
          processors: [memory_limiter, batch]
          exporters: [prometheus]

Log output

No response

Additional context

No response

github-actions[bot] commented 1 month ago

Pinging code owners:

dashpole commented 1 day ago

Can you use the debug exporter to confirm that you aren't still receiving the metrics in question?
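For reference, one way to follow this suggestion with the configuration posted above would be to add the already-defined debug exporter alongside prometheus in the metrics pipeline, so every datapoint the exporter receives is also logged (a sketch, not a confirmed fix):

```yaml
# Sketch: reuse the existing debug exporter in the metrics pipeline
# so incoming spanmetrics datapoints are logged for inspection.
service:
  pipelines:
    metrics:
      receivers: [spanmetrics]
      processors: [memory_limiter, batch]
      exporters: [prometheus, debug]
```

If the killed pods' metrics no longer appear in the debug output but still show up on the `/metrics` endpoint past the expiration window, that would point at the exporter's expiration logic rather than at data still arriving.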

dashpole commented 1 day ago

@jmichalek132