grafana / alloy

OpenTelemetry Collector distribution with programmable pipelines
https://grafana.com/oss/alloy
Apache License 2.0

Tracking: Address Clustering Issues #784

Open thampiotr opened 4 months ago

thampiotr commented 4 months ago

Request

There are a few issues, both reported by users and observed by us, that can lead to data problems:

1) gaps in metrics under some circumstances when instances join the cluster,
2) elevated errors and alerts when writing to the TSDB in some cases,
3) duplicated metrics in other cases.

The extent of these issues is limited, but since they can potentially cause data loss, we want to fully understand and address them.

Use case

Data being sent should not be dropped.
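
For context, all three failure modes involve clustered scrape pipelines: with clustering enabled on a component, scrape targets are distributed across the cluster peers, and peers joining or leaving causes targets to be re-assigned between instances. A minimal sketch of such a pipeline (component labels and the remote write URL are illustrative, not from this issue):

```alloy
discovery.kubernetes "pods" {
  role = "pod"
}

prometheus.scrape "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]

  // With clustering enabled, targets are split across cluster peers;
  // peers joining or leaving re-distributes target ownership, which is
  // where the gaps and duplicates described above can creep in.
  clustering {
    enabled = true
  }
}

prometheus.remote_write "default" {
  endpoint {
    url = "http://example-mimir:9009/api/v1/push"
  }
}
```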

### Tasks
- [ ] https://github.com/grafana/alloy/issues/249
- [ ] https://github.com/grafana/agent/pull/6792
- [x] Review in detail the errors when remote writing and identify the most common ones
- [ ] https://github.com/prometheus/prometheus/pull/14326
- [x] Add SR on dashboards https://github.com/grafana/alloy/pull/1100
- [x] Verify dashboards latest changes
- [x] Create an issue to track the known OOO problems.
- [x] Review reports of duplicate samples errors
- [x] Investigate resizing causing out-of-order errors
- [x] Review reports of gaps in metrics
- [x] Reduce spammy logging related to clustering: https://github.com/grafana/alloy/blob/881c4b7cf72d0a9c068cca04225fac042e6e4714/internal/service/cluster/cluster.go#L220
- [ ] https://github.com/grafana/alloy/pull/1261
- [ ] https://github.com/grafana/alloy/issues/1208
- [ ] https://github.com/grafana/alloy/issues/1009
- [ ] [stretch] explore options for clustering support for push-based workflows
- [ ] [stretch] New highly-available cluster architecture POC
- [ ] [new/stretch] don't admit traffic until cluster is sufficient size: https://github.com/grafana/alloy/issues/201
- [ ] [stretch] Fine-grained component scheduling: https://github.com/grafana/alloy/issues/399
- [ ] [follow-up] make cluster improvements GA: https://github.com/grafana/alloy/issues/1274
- [ ] [follow-up] verify we address this: https://github.com/grafana/alloy/issues/1349
diguardiag commented 4 months ago

I am noticing this behaviour on a Kubernetes cluster (~1800 pods) with an Alloy cluster of 3, Istio present, and pod autodiscovery enabled.

We're experiencing:

gowtham-sundara commented 4 months ago

I noticed an issue today where one of the pods fell out of the cluster: it is present in discovery, but none of the Alloy pods actually scrape it. This did not resolve over a long period of time, so I am not sure whether it's related to issue 1) mentioned above.

christopher-wong commented 3 months ago

This is the Helm configuration for my deployment of Alloy (24 nodes, so 24 Alloy pods). When I enable clustering (alloy.clustering.enabled = true), metrics stop being scraped altogether.

```yaml
alloy:
  configMap:
    content: |-
      prometheus.remote_write "default" {
        endpoint {
          url = "http://mimir-gateway.monitoring.svc:80/api/v1/push"
        }
      }

      prometheus.operator.servicemonitors "services" {
        forward_to = [prometheus.remote_write.default.receiver]

        clustering {
          enabled = true
        }
      }

      prometheus.operator.podmonitors "pods" {
        forward_to = [prometheus.remote_write.default.receiver]

        clustering {
          enabled = true
        }
      }
  clustering:
    enabled: false
  resources:
    requests:
      cpu: 100m
      memory: 2Gi
    limits:
      cpu: 1.5
      memory: 12Gi
configReloader:
  resources:
    requests:
      cpu: "1m"
      memory: "5Mi"
    limits:
      cpu: 10m
      memory: 10Mi
```
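
For anyone reproducing this: the values as pasted have the chart-level clustering toggle switched off (presumably reverted after the failure). The failing state described corresponds to enabling it at the chart level as well, e.g. (a sketch assuming the standard grafana/alloy Helm chart keys):

```yaml
# Sketch only. The chart-level toggle starts Alloy's cluster service;
# the component-level clustering blocks above only distribute targets
# across pods when that service is running.
alloy:
  clustering:
    enabled: true
```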
itjobs-levi commented 3 months ago

I run 3 Alloy agent replicas (CPU: 1000m / memory: 4Gi each) with clustering mode enabled on both scrape components, configured to scrape the unix exporter and process exporter on about 200 servers at one-minute intervals. When scraping, many errors such as err-mimir-duplicate-label-names occur in Mimir. According to the Grafana Mimir documentation, err-mimir-duplicate-label-names is a problem with the records being written. I think this is caused by the cluster splitting the scrape jobs for load balancing.

My first question: does this come from the load balancing, as it feels like it does? And if it is not a real problem, is it possible to turn off these logs?
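
On the second question: if the goal is only to reduce noise in Alloy's own logs, the verbosity can be raised via Alloy's logging block. This is a sketch only; it does not stop the underlying rejected writes (which remain visible on the Mimir side), and messages emitted at error level will still appear:

```alloy
// Sketch only: raises the minimum level of Alloy's own log output.
logging {
  level  = "warn"
  format = "logfmt"
}
```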

thampiotr commented 3 months ago

@itjobs-levi @christopher-wong @gowtham-sundara @diguardiag could you open issues for these and provide clear steps to reproduce? These may need to be looked into separately.