prometheus-community / elasticsearch_exporter

Elasticsearch stats exporter for Prometheus
Apache License 2.0

No data in #883

Open ryebridge opened 2 months ago

ryebridge commented 2 months ago

I have installed the helm chart from the Kube Prometheus stack here:

https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md

....then I added:

https://github.com/prometheus-community/elasticsearch_exporter

...and updated the elastic-prometheus-elasticsearch-exporter deployment with the following options:

    - --log.format=logfmt
    - --log.level=info
    - --es.uri=https://admin:admin@opensearch-logs-data.helix-platform:9200
    - --es.all
    - --es.indices
    - --es.ssl-skip-verify
    - --es.indices_settings
    - --es.indices_mappings
    - --es.shards
    - --collector.snapshots
    - --es.timeout=30s
    - --web.listen-address=:9108
    - --web.telemetry-path=/metrics
    image: quay.io/prometheuscommunity/elasticsearch-exporter:v1.7.0

...and when I check the pod logs it seems to be collecting data:

level=info ts=2024-04-04T09:37:36.260266299Z caller=clusterinfo.go:214 msg="triggering initial cluster info call"
level=info ts=2024-04-04T09:37:36.260317077Z caller=clusterinfo.go:183 msg="providing consumers with updated cluster info label"
level=info ts=2024-04-04T09:37:36.271143372Z caller=main.go:244 msg="started cluster info retriever" interval=5m0s
level=info ts=2024-04-04T09:37:36.271525105Z caller=tls_config.go:274 msg="Listening on" address=[::]:9108
level=info ts=2024-04-04T09:37:36.271545007Z caller=tls_config.go:277 msg="TLS is disabled." http2=false address=[::]:9108
level=info ts=2024-04-04T09:42:36.260458556Z caller=clusterinfo.go:183 msg="providing consumers with updated cluster info

...but when I log into Prometheus, I can't see anything related to elastic. Am I missing some additional configuration?

Thanks for any tips in advance.

Regards, John

sysadmind commented 2 months ago

I suspect what is happening here is that your Prometheus is not configured to scrape the exporter. Some things you can check:

- Can you reach the /metrics endpoint on the exporter directly (it listens on port 9108)?
- Does the exporter show up on the Targets page in the Prometheus UI?
- Is there a ServiceMonitor (or other scrape config) that actually selects the exporter's service?

ryebridge commented 2 months ago

Thanks so much for replying, I've been trying to get this to work for a few days now. Can you please explain how I could check the /metrics endpoint on the exporter?

I logged into the "kube-prometheus-stack-grafana" pod and did a curl against the "pgexporter-prometheus-postgres-exporter" service IP address, which according to the env variable KUBE_PROMETHEUS_STACK_KUBE_STATE_METRICS_PORT_8080_TCP_PORT is on port 8080, but it's not able to connect at all.

I can't see the exporter as a target in the dashboard at all. This is what I have for the prometheuses.monitoring.coreos.com CRD:

scrapeConfigSelector:
  matchLabels:
    release: kube-prometheus-stack

...do I need to create a "ScrapeConfig" as it suggests here? I don't see any ScrapeConfig objects in the "kube-prometheus-stack" namespace.

https://medium.com/@helia.barroso/a-guide-to-service-discovery-with-prometheus-operator-how-to-use-pod-monitor-service-monitor-6a7e4e27b303
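
From that guide, my rough understanding is that a minimal ScrapeConfig pointing at the exporter would look something like the snippet below. The service DNS name, namespace and port are guesses on my part, not something I have working; the release label is there so it matches the scrapeConfigSelector above.

apiVersion: monitoring.coreos.com/v1alpha1
kind: ScrapeConfig
metadata:
  name: elasticsearch-exporter
  namespace: kube-prometheus-stack
  labels:
    release: kube-prometheus-stack  # so the scrapeConfigSelector above picks it up
spec:
  staticConfigs:
  - targets:
    - 'elastic-prometheus-elasticsearch-exporter.helix-platform:9108'  # guessed exporter service name, namespace and port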

sysadmind commented 2 months ago

For your first question about checking the exporter pod: I think you're conflating the connection to prometheus with the connection to the exporter. The environment variable you mention is for kube-prometheus-stack, not the elasticsearch-exporter. In the command args you originally mention as well as the logs from the exporter, the exporter is listening on port 9108. I think you want something similar to this: curl pgexporter-prometheus-postgres-exporter:9108/metrics

For the scrape configs, I think I linked you to the wrong section. Try here: https://prometheus-operator.dev/docs/user-guides/getting-started/. That talks about using a ServiceMonitor to monitor a kubernetes service. There is also a PodMonitor if you don't have a service.

Here's an example:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
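
And a rough PodMonitor equivalent for the same hypothetical app (assuming the pods carry the app: example-app label and expose a port named web) would be:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  podMetricsEndpoints:
  - port: web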
ryebridge commented 2 months ago

Thanks Joe,

Really appreciate you helping me out here, I must be missing a step :-(

I'm getting a little confused between exporters and service monitors. I initially tried to set up a service monitor against the elastic service but couldn't see any metrics in Prometheus, so I assumed the alternative was to use an exporter and configure the connection in the deployment.

Here's my first attempt using a service monitor against the elastic service.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    meta.helm.sh/release-namespace: kube-prometheus-stack
  labels:
    app: prometheus
    release: prometheus
  name: my-es-monitor
  namespace: kube-prometheus-stack
spec:
  endpoints:
  - interval: 30s
    path: /metrics
    scrapeTimeout: 20s
    targetPort: 9108
  namespaceSelector:
    matchNames:
    - my-platform
  selector:
    matchLabels:
      app.kubernetes.io/name: opensearch

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: opensearch-logs-data
  labels:
    app.kubernetes.io/component: opensearch-logs-data
    app.kubernetes.io/instance: opensearch-logs-data
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: opensearch
    app.kubernetes.io/version: 1.3.13
    helm.sh/chart: opensearch-1.23.1
  name: opensearch-logs-data
  namespace: my-platform
spec:
  clusterIP: xx.xx.xxx.xxx
  clusterIPs:
  - xx.xx.xxx.xxx
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: transport
    port: 9300
    protocol: TCP
    targetPort: 9300
  selector:
    app.kubernetes.io/instance: opensearch-logs-data
    app.kubernetes.io/name: opensearch
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
sysadmind commented 2 months ago

So you have prometheus - this is your database. It stores the metrics and you can query it. It also scrapes metrics from exporters.

Exporters - these are things that expose metrics. They are often translators of data. In this case elasticsearch_exporter takes data from elasticsearch and exposes it as prometheus metrics. By itself the exporter only exposes metrics over HTTP(s).

The kube-prometheus-stack glues a bunch of stuff together to make many pieces work together. The service monitor is a way to tell prometheus about kubernetes services to monitor.
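
As an aside, the Prometheus custom resource only picks up ServiceMonitors whose labels match its serviceMonitorSelector. I believe in kube-prometheus-stack that selector defaults to the same release label as the scrapeConfigSelector you showed, roughly:

serviceMonitorSelector:
  matchLabels:
    release: kube-prometheus-stack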

What you have in your last comment looks okay to me, but I'm not an expert. If you still don't have the target in prometheus, it's probably something with the config for kube-prometheus-stack. I think this is the repo for that: https://github.com/prometheus-operator/prometheus-operator

You could also try the #prometheus channel in the CNCF slack. That might be more fruitful for kube-prometheus-stack issues.

ryebridge commented 2 months ago

Thanks again, I'll give that a try :)

ryebridge commented 2 months ago

Hey again, I exposed the elastic exporter service as a NodePort service and confirmed it's serving the metrics, but I still can't get them into Prometheus :-( From reading further, it seems a ServiceMonitor is required to avoid having to manually add a new scrape config to the Prometheus configuration.
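
For reference, this is roughly the ServiceMonitor I'm planning to try next, pointed at the exporter service itself rather than the opensearch service. The exporter service name, its labels, the port name and the namespace are assumptions on my part, as is the release label matching the Prometheus serviceMonitorSelector:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: elasticsearch-exporter
  namespace: kube-prometheus-stack
  labels:
    release: kube-prometheus-stack  # assumed to match the Prometheus serviceMonitorSelector
spec:
  namespaceSelector:
    matchNames:
    - helix-platform  # assumed namespace of the exporter service
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus-elasticsearch-exporter  # assumed label on the exporter service
  endpoints:
  - port: http  # assumed name of the exporter service port that maps to 9108
    path: /metrics
    interval: 30s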