open-telemetry / opentelemetry-collector-contrib

Contrib repository for the OpenTelemetry Collector
https://opentelemetry.io
Apache License 2.0
3.08k stars · 2.37k forks

kubeletstats metrics have no labels. K8s_node_name only shows in target_info metric #27839

Closed · yairt2 closed this issue 1 year ago

yairt2 commented 1 year ago

Component(s)

kubeletstats receiver

What happened?

Description: I'm trying to use the kubeletstats receiver as a replacement for node exporter. All metrics are sent to a VictoriaMetrics cluster via the Prometheus remote write exporter. The OpenTelemetry Collector is deployed as a DaemonSet on my Kubernetes cluster. I do receive the metrics, but each metric only shows up once (as if they are being accumulated), and the only metric that carries the k8s_node_name label is target_info.

Expected Result: all metrics are tagged with the k8s_node_name label.

Actual Result: only the target_info metric carries the k8s_node_name label.

Collector version

v0.86.0

Environment information

kubernetes

OpenTelemetry Collector configuration

exporters:
  prometheusremotewrite:
    endpoint:  http://victoriametrics-victoria-metrics-cluster-vminsert.monitoring.svc.cluster.local:8480/insert/0/prometheus/api/v1/write
extensions:
  file_storage:
    directory: /var/lib/otelcol
  health_check: {}
  memory_ballast:
    size_in_percentage: 40
processors:
  batch:
    send_batch_max_size: 8000
    send_batch_size: 8000
    timeout: 0s
  k8sattributes:
    auth_type: serviceAccount
    extract:
      metadata:
      - k8s.namespace.name
      - k8s.pod.name
      - k8s.pod.hostname
      - k8s.container.name
      - container.image.name
      - container.image.tag
      - container.id
      - k8s.deployment.name
      - k8s.statefulset.name
      - k8s.daemonset.name
      - k8s.job.name
      - k8s.cronjob.name
    filter:
      node_from_env_var: ${K8S_NODE_NAME}
    passthrough: true
    pod_association:
    - sources:
      - from: resource_attribute
        name: k8s.pod.ip
    - sources:
      - from: resource_attribute
        name: k8s.pod.uid
    - sources:
      - from: connection
  memory_limiter:
    check_interval: 1s
    limit_mib: 1500
    spike_limit_mib: 800
  resource:
    attributes:
    - action: insert
      from_attribute: service.name
      key: job
    - action: upsert
      from_attribute: k8s.daemonset.name
      key: service.name
    - action: upsert
      from_attribute: k8s.replicaset.name
      key: service.name
    - action: upsert
      from_attribute: k8s.statefulset.name
      key: service.name
    - action: upsert
      from_attribute: k8s.job.name
      key: service.name
    - action: upsert
      from_attribute: k8s.cronjob.name
      key: service.name
    - action: insert
      key: collector.name
      value: ${KUBE_POD_NAME}
    - action: upsert
      key: k8s.cluster.name
      value: dev-env
  resource/standard:
    attributes:
    - action: upsert
      key: ClusterName
      value: clustername
    - action: upsert
      key: node_name
      value: ${K8S_NODE_NAME}
receivers:
  hostmetrics:
    collection_interval: 10s
    root_path: /hostfs
    scrapers:
      cpu: null
      disk: null
      filesystem:
        exclude_fs_types:
          fs_types:
          - autofs
          - binfmt_misc
          - bpf
          - cgroup2
          - configfs
          - debugfs
          - devpts
          - devtmpfs
          - fusectl
          - hugetlbfs
          - iso9660
          - mqueue
          - nsfs
          - overlay
          - proc
          - procfs
          - pstore
          - rpc_pipefs
          - securityfs
          - selinuxfs
          - squashfs
          - sysfs
          - tracefs
          match_type: strict
        exclude_mount_points:
          match_type: regexp
          mount_points:
          - /dev/*
          - /proc/*
          - /sys/*
          - /run/k3s/containerd/*
          - /var/lib/docker/*
          - /var/lib/kubelet/*
          - /snap/*
          - /hostfs/run/containerd/*
      load: null
      memory: null
      network: null
      paging: null
  kubeletstats:
    auth_type: serviceAccount
    collection_interval: 20s
    endpoint: "https://${env:K8S_NODE_NAME}:10250"
    insecure_skip_verify: true
  prometheus:
    config:
      scrape_configs:
      - job_name: opentelemetry-collector
        scrape_interval: 10s
        static_configs:
        - targets:
          - ${MY_POD_IP}:8888
service:
  extensions:
  - health_check
  - memory_ballast
  - file_storage
  pipelines:
    metrics:
      exporters:
      - prometheusremotewrite
      processors:
      - k8sattributes
      - resource/standard
      - memory_limiter
      - batch
      receivers:
      - kubeletstats
  telemetry:
    logs:
      level: debug
    metrics:
      address: 0.0.0.0:8888

Log output

No response

Additional context

No response

github-actions[bot] commented 1 year ago

Pinging code owners for receiver/kubeletstats: @dmitryax @TylerHelmuth. See Adding Labels via Comments if you do not have permissions to add labels yourself.

yantingqiu commented 1 year ago

Your prometheusremotewrite exporter is missing a configuration option.

exporters:
  prometheusremotewrite:
    endpoint:  http://victoriametrics-victoria-metrics-cluster-vminsert.monitoring.svc.cluster.local:8480/insert/0/prometheus/api/v1/write

Change it to:

exporters:
  prometheusremotewrite:
    endpoint:  http://victoriametrics-victoria-metrics-cluster-vminsert.monitoring.svc.cluster.local:8480/insert/0/prometheus/api/v1/write
    resource_to_telemetry_conversion:
       enabled: true

reference docs: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/prometheusremotewriteexporter
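For background on why this option is needed: by default the prometheusremotewrite exporter follows the OpenTelemetry-to-Prometheus compatibility mapping, which exports resource attributes (such as k8s.node.name) only as labels on a separate target_info metric. With resource_to_telemetry_conversion enabled, every resource attribute is instead copied onto each exported series as an ordinary metric label. Roughly, in Prometheus exposition terms (the metric name and label values below are illustrative):

```
# default: resource attributes appear only on target_info
target_info{k8s_node_name="node-1", ...} 1
k8s_node_cpu_utilization{} 0.42

# with resource_to_telemetry_conversion.enabled: true
k8s_node_cpu_utilization{k8s_node_name="node-1"} 0.42
```

Note the trade-off: flattening resource attributes into labels on every series increases cardinality on the Prometheus side; the alternative is to keep the default behavior and join against target_info in PromQL.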

yairt2 commented 1 year ago

Fantastic, that did the job. Thank you very much!