thampiotr opened this issue 4 months ago
I am noticing this behaviour on a Kubernetes cluster (~1800 pods) with an Alloy cluster of 3 instances, Istio present, and pod autodiscovery enabled.
We're experiencing:
I noticed an issue today where one of the pods fell out of the clustering: it is present in discovery, but none of the Alloy pods actually scrape it. This did not go away over a long period of time, so I am not sure whether it is related to #1 mentioned above.
This is my Helm configuration for my deployment of Alloy (24 nodes, so 24 Alloy pods). When I enable clustering (`alloy.clustering.enabled = true`), metrics stop being scraped altogether.
```yaml
alloy:
  configMap:
    content: |-
      prometheus.remote_write "default" {
        endpoint {
          url = "http://mimir-gateway.monitoring.svc:80/api/v1/push"
        }
      }
      prometheus.operator.servicemonitors "services" {
        forward_to = [prometheus.remote_write.default.receiver]
        clustering {
          enabled = true
        }
      }
      prometheus.operator.podmonitors "pods" {
        forward_to = [prometheus.remote_write.default.receiver]
        clustering {
          enabled = true
        }
      }
  clustering:
    enabled: false
  resources:
    requests:
      cpu: 100m
      memory: 2Gi
    limits:
      cpu: 1.5
      memory: 12Gi
configReloader:
  resources:
    requests:
      cpu: "1m"
      memory: "5Mi"
    limits:
      cpu: 10m
      memory: 10Mi
```
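For reference, a minimal sketch of the values override that would switch clustering on, assuming the grafana/alloy Helm chart layout shown above (the component-level `clustering { enabled = true }` blocks in the Alloy configuration stay as they are; this is only the chart-level flag):

```yaml
# Hypothetical values override -- turns on Alloy's clustering mode
# at the chart level, matching the alloy.clustering.enabled = true
# setting the report above says triggers the problem.
alloy:
  clustering:
    enabled: true
```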
My setup: 3 Alloy agent replicas (CPU: 1000m / Memory: 4Gi each), with clustering mode enabled on both scrape components. They are configured to scrape the unix exporter and the process exporter on about 200 servers at one-minute intervals. When scraping, many errors such as err-mimir-duplicate-label-names occur in Mimir. According to the Grafana Mimir documentation, err-mimir-duplicate-label-names appears to be a problem caused by existing records. I think this is caused by the cluster's load balancing splitting the scrape jobs.
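The setup described above might look roughly like the following Alloy configuration. This is a hypothetical reconstruction: the target addresses, ports, component names, and remote_write URL are illustrative, not taken from the report. The `clustering` block on `prometheus.scrape` is what distributes the ~200 targets across the 3 replicas:

```alloy
prometheus.scrape "exporters" {
  // ~200 servers, each running a unix (node) exporter and a process
  // exporter; addresses and ports here are placeholders.
  targets = [
    {"__address__" = "server-001:9100"},
    {"__address__" = "server-001:9256"},
    // ...
  ]
  forward_to      = [prometheus.remote_write.mimir.receiver]
  scrape_interval = "60s"

  clustering {
    enabled = true
  }
}

prometheus.remote_write "mimir" {
  endpoint {
    url = "http://mimir-gateway.monitoring.svc:80/api/v1/push"
  }
}
```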
My first question: does this come from the cluster's load balancing — is that correct? And if it is not actually harmful, is it possible to turn off these error logs?
@itjobs-levi @christopher-wong @gowtham-sundara @diguardiag could you open issues for these and provide clear steps to reproduce? These may need to be looked into separately.
Request
There are a few issues that users report and that we are observing ourselves, which can lead to data problems:
1) there can be gaps in metrics under some circumstances when instances join the cluster,
2) there can be elevated errors and alerts when writing to the TSDB in some cases,
3) there can be duplicated metrics in other cases.
The extent of these issues is not large, but since they are potential data-loss issues, we want to address them and fully understand the problem.
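The "gaps when instances join" behaviour in (1) can be illustrated with a small simulation. This is a simplified rendezvous-hashing sketch, not Alloy's actual clustering algorithm: each target is owned by exactly one instance, and when a new instance joins, a fraction of targets change owner — that handover window is where a target can briefly go unscraped (a gap) or be scraped twice (duplicates).

```python
import hashlib

def owner(instances, target):
    # Rendezvous (highest-random-weight) hashing sketch: the instance
    # with the highest hash of (instance, target) owns the target.
    def score(inst):
        return int(hashlib.sha256(f"{inst}/{target}".encode()).hexdigest(), 16)
    return max(instances, key=score)

targets = [f"pod-{i}" for i in range(1000)]
before = {t: owner(["alloy-0", "alloy-1", "alloy-2"], t) for t in targets}
after = {t: owner(["alloy-0", "alloy-1", "alloy-2", "alloy-3"], t) for t in targets}

# Roughly 1/4 of targets are expected to move to the new instance.
moved = sum(1 for t in targets if before[t] != after[t])
print(f"{moved} of {len(targets)} targets changed owner when alloy-3 joined")
```

If the old and new owners do not hand over a moved target atomically, that target can miss a scrape interval or be scraped by both instances during the transition, which matches the gap and duplication symptoms reported above.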
Use case
Data that is being sent should not be dropped.