Open shrutichy91 opened 1 month ago
Pinging code owners:
receiver/prometheus: @Aneurysm9 @dashpole
Is the metrics port exposed on the scheduler pod? You shouldn't need a service if the scheduler is running in-cluster.
In our environment, this is reproducible with build 0.111.0 and not reproducible with 0.110.0.
Component(s)
receiver/prometheus
Describe the issue you're reporting
I have a 3-node Kubernetes cluster. I am running the OpenTelemetry Collector as a DaemonSet with the following config:
extensions:
  # The health_check extension is mandatory for this chart.
  health_check: {}
  ...
processors:
  ...
receivers:
  ...
exporters:
  logging: {}
  prometheusremotewrite:
    endpoint: "xxxxxxx"
    resource_to_telemetry_conversion:
      enabled: true
    tls:
      insecure: true
    auth:
      authenticator: bearertokenauth
service:
  telemetry:
    metrics:
      address: ${env:MY_POD_IP}:8888
    logs:
      level: debug
  extensions:
    - bearertokenauth
  pipelines:
    metrics:
      exporters:
        ...
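(The processors and receivers sections are trimmed above. For context, a kube-scheduler scrape config under the prometheus receiver generally looks something like the sketch below; the job name, relabeling, and credential path are illustrative assumptions, not copied from my actual config.)

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: kube-scheduler        # matches the scrape_pool in the error below
          scheme: https
          tls_config:
            insecure_skip_verify: true
          authorization:
            credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          kubernetes_sd_configs:
            - role: pod
              namespaces:
                names: [kube-system]
          relabel_configs:
            # keep only the scheduler pods (kubeadm labels them component=kube-scheduler)
            - source_labels: [__meta_kubernetes_pod_label_component]
              action: keep
              regex: kube-scheduler
            # point the target at the scheduler's secure metrics port;
            # $$ escapes $ so the collector's config expansion leaves it for Prometheus
            - source_labels: [__address__]
              action: replace
              regex: '([^:]+)(?::\d+)?'
              replacement: '$$1:10259'
              target_label: __address__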
I get the following error:
2024-10-23T12:45:56.402Z debug scrape/scrape.go:1331 Scrape failed {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_pool": "kube-scheduler", "target": "https://100.xx.xx.xx:10259/metrics", "error": "Get \"https://100.xx.xx.xx:10259/metrics\": dial tcp 100.xx.xx.xx:10259: connect: connection refused"}
I have kube-scheduler running as three pods, one on each node of the 3-node cluster, in the kube-system namespace. Do I need a Kubernetes Service of type NodePort to get this to work?
I tried logging in to the node and running curl -kvv https://100.xx.xx.xx:10259/metrics; I get connection refused, but it does work with curl -kvv https://localhost:10259/metrics.
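For what it's worth, on a kubeadm-provisioned cluster the scheduler's static pod manifest normally binds its secure port to loopback, which would match exactly this localhost-works, node-IP-refused behaviour. The excerpt below is an assumption about what /etc/kubernetes/manifests/kube-scheduler.yaml on each control-plane node may contain, not something I have confirmed:

apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-scheduler
      command:
        - kube-scheduler
        # kubeadm's default; with 127.0.0.1 the :10259 endpoint only answers on the node
        # itself, so curl via localhost succeeds while the node/pod IP is refused
        - --bind-address=127.0.0.1
        ...

If that flag turns out to be the cause, the usual options seem to be changing the bind address or scraping localhost:10259 from a collector on the host network, rather than adding a NodePort Service, but I have not verified either yet.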