grafana / alloy

OpenTelemetry Collector distribution with programmable pipelines
https://grafana.com/oss/alloy
Apache License 2.0

node exporter network metrics values mismatch #1224

Open ashwinisivakumar opened 4 months ago

ashwinisivakumar commented 4 months ago

What's wrong?

We run node_exporter as a standalone process, scrape it with Prometheus, and remote-write the metrics to Mimir. In parallel, we run Alloy with its integrated node exporter (`prometheus.exporter.unix`), sending metrics to the same Mimir.

When we compared the standalone node_exporter metrics against the Alloy-integrated ones, the network-related metric values differed, while all CPU, memory, and disk metric values matched in both cases.

[screenshot: side-by-side comparison showing the mismatched network metric values]
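One possible source of skew worth ruling out (an assumption on my part, not established in this issue): the `node_network_*` series are fast-moving counters, so two scrapers sampling the same counter a few seconds apart will read different raw values even though nothing is wrong. A toy illustration, with a hypothetical constant traffic rate:

```python
# Toy illustration (hypothetical numbers, not from the issue): two scrapers
# sampling the same monotonically increasing counter at offset instants read
# different raw values, even though the underlying rate is identical.

def counter_value(t, rate=125_000.0):
    """Hypothetical node_network_receive_bytes_total at time t (bytes)."""
    return rate * t

# Scraper A (standalone node_exporter path) samples at t=60s;
# scraper B (Alloy's embedded exporter) samples 7s later.
a = counter_value(60)
b = counter_value(67)
print(a == b)  # False: raw values differ

# But the rate over a common 60s window is the same for both:
rate_a = (counter_value(60) - counter_value(0)) / 60
rate_b = (counter_value(67) - counter_value(7)) / 60
print(rate_a == rate_b)  # True
```

If the values still diverge after comparing rates over the same window (rather than raw counter samples), the mismatch is more likely real.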

Steps to reproduce

1. Run node_exporter as a standalone application and scrape it with Prometheus.
2. Run Alloy with the integrated node exporter alongside it.
3. Compare the network metrics produced by the two setups.
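The comparison step can be sketched as below. The payloads here are fabricated purely for illustration; in practice they would come from the two exporters' `/metrics` endpoints.

```python
# Sketch (with fabricated sample payloads) of comparing node_network_*
# series from two Prometheus exposition-format dumps.

def parse(text):
    """Parse exposition-format lines into {series: value}, skipping comments."""
    series = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, value = line.rsplit(" ", 1)
        series[name] = float(value)
    return series

standalone = parse("""
node_network_receive_bytes_total{device="eth0"} 7.5e9
node_network_transmit_bytes_total{device="eth0"} 3.2e9
""")
alloy = parse("""
node_network_receive_bytes_total{device="eth0"} 7.6e9
node_network_transmit_bytes_total{device="eth0"} 3.2e9
""")

for name in sorted(standalone):
    if name.startswith("node_network_") and standalone[name] != alloy.get(name):
        print("mismatch:", name, standalone[name], alloy[name])
```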

System information

No response

Software version

Node exporter version: 1.8.1
Alloy version: 1.0

Configuration

    // Node exporter
    prometheus.exporter.unix "integrations_node_exporter" {
      include_exporter_metrics = true
    }

    discovery.relabel "integrations_node_exporter" {
      targets = prometheus.exporter.unix.integrations_node_exporter.targets

      rule {
        target_label = "job"
        replacement  = "integrations/node_exporter"
      }
    }

    prometheus.scrape "integrations_node_exporter" {
      targets    = discovery.relabel.integrations_node_exporter.output
      forward_to = [prometheus.relabel.integrations_node_exporter.receiver]
      job_name   = "integrations/node_exporter"
    }

    prometheus.relabel "integrations_node_exporter" {
      forward_to = [prometheus.remote_write.prom_receiver.receiver]

      rule {
        source_labels = ["__name__"]
        regex         = "node_(arg_|cooling_device_|cpu_|disk_|entropy_|filefd_|filesystem_|hwmon_temp_|memory_|netstat_Tcp_|netstat_Ip_|netstat_TcpExt_|netstat_Udp_|netstat_UdpLite_|network_receive_|network_transmit_|nf_conntrack_|schedstat_|sockstat_TCP_|sockstat_UDPLITE_|softnet_|systemd_socket_|textfile_|timex_estimated_|timex_loop_|timex_maxerror_|timex_offset_|timex_sync_|timex_tai_|timex_tick_|boot_time_|procs_|forks_|context_switches_|vmstat_).*|node_(load1|load5|load15|intr_total|time_seconds)"
        action        = "keep"
      }

      rule {
        source_labels = ["__name__"]
        regex         = "(go_|node_xfs_|node_timex_|node_power_supply|node_nf_conntrack_|node_cooling_|node_scrape_collector_|loki_chunk_|node_export_build_|node_dmi_|node_exporter_build_).*"
        action        = "drop"
      }
    }

    prometheus.remote_write "prom_receiver" {
      endpoint {
        url     = ""
        headers = {
          "X-Scope-OrgID" = "",
        }

        queue_config {
          min_shards           = 1
          max_shards           = 5
          max_samples_per_send = 5000
          batch_send_deadline  = "60s"
          min_backoff          = "5s"
          max_backoff          = "30s"
          sample_age_limit     = "300s"
        }
      }

      wal {
        truncate_frequency = "2h"
        min_keepalive_time = "5m"
        max_keepalive_time = "8h"
      }

      external_labels = {}
    }
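One detail worth noting in the config above: the drop rule's `node_timex_` pattern cancels out all the `timex_*` entries that the keep rule admits, since drop rules run after keeps. A small sketch of how the two rules interact (the regexes below are a trimmed, illustrative subset of the real ones, and Prometheus-style relabeling fully anchors its regexes, hence the `^(?:...)$` wrapping):

```python
import re

# Sketch of the two prometheus.relabel rules from the config above,
# using a trimmed subset of the real keep/drop regexes for illustration.
# Prometheus/Alloy relabel regexes are fully anchored, so wrap them.
KEEP = re.compile(r"^(?:node_(cpu_|memory_|network_receive_|network_transmit_|timex_offset_).*)$")
DROP = re.compile(r"^(?:(go_|node_timex_|node_scrape_collector_).*)$")

def forwarded(name: str) -> bool:
    """True if the series survives the keep rule and is not then dropped."""
    return bool(KEEP.match(name)) and not DROP.match(name)

for name in ["node_network_receive_bytes_total",
             "node_cpu_seconds_total",
             "go_goroutines",
             "node_timex_offset_seconds"]:
    print(name, forwarded(name))
# node_timex_offset_seconds is kept by the keep rule's timex_offset_ branch
# but then removed by the drop rule's node_timex_ branch.
```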

Logs

No response

github-actions[bot] commented 3 months ago

This issue has not had any activity in the past 30 days, so the needs-attention label has been added to it. If the opened issue is a bug, check to see if a newer release fixed your issue. If it is no longer relevant, please feel free to close this issue. The needs-attention label signals to maintainers that something has fallen through the cracks. No action is needed by you; your issue will be kept open and you do not have to respond to this comment. The label will be removed the next time this job runs if there is new activity. Thank you for your contributions!