camilb / prometheus-kubernetes

Monitoring Kubernetes clusters on AWS, GCP and Azure using Prometheus Operator and Grafana
Apache License 2.0
672 stars 300 forks

Scraping custom HTTPS metrics #113

Closed aviramartac closed 6 years ago

aviramartac commented 6 years ago

Hey,

I was wondering if you could please help me set up a simple custom scrape in your setup. On a different project I have the following configuration:

How would this work using your project? I know it is using Prometheus Operator, which uses ServiceMonitor, but I'm struggling to understand how to configure and setup a new one of my own.

Thanks

camilb commented 6 years ago

This should work:

apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: example-api
  name: example-api
  namespace: monitoring
spec:
  externalName: example.url.io  # or an IP address
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  sessionAffinity: None
  type: ExternalName
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: example-api
  name: example-api
  namespace: monitoring
spec:
  endpoints:
  - honorLabels: true
    interval: 15s
    path: /api/metrics
    port: https
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - monitoring
  selector:
    matchLabels:
      k8s-app: example-api
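
One thing to watch for with HTTPS targets: a ServiceMonitor endpoint scrapes over plain HTTP by default, even when the port is named `https`. A minimal sketch of the endpoint section with TLS enabled (the `insecureSkipVerify` setting is an assumption for targets with self-signed certificates; point `tlsConfig` at a proper CA bundle in production):

```yaml
spec:
  endpoints:
  - honorLabels: true
    interval: 15s
    path: /api/metrics
    port: https
    scheme: https               # scrape over TLS instead of the default plain HTTP
    tlsConfig:
      insecureSkipVerify: true  # assumption: skip cert verification for self-signed certs
```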
camilb commented 6 years ago

Make sure the endpoint is created. Should be something similar to this:

apiVersion: v1
kind: Endpoints
metadata:
  labels:
    k8s-app: example-api
  name: example-api
  namespace: monitoring
subsets:
- addresses:
  - ip: x.x.x.x  # the "ip" field must be an actual IP address, not a hostname
  ports:
  - name: https
    port: 443
    protocol: TCP
aviramartac commented 6 years ago

Thanks for the help, I believe I understand and I will try it out. What if I already have an existing internal service, in another namespace, that I want to monitor? It is exposed internally via port 80 in this case (it has an internal endpoint of myapp-backend-internal:80 TCP), and the metrics are still in /api/metrics.

I imagine I would have to do something like this:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: myapp-api
  name: myapp-api
  namespace: myapp-namespace
spec:
  endpoints:
  - honorLabels: true
    interval: 15s
    path: /api/metrics
    port: http
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - myapp-namespace
  selector:
    matchLabels:
      k8s-app: myapp-api

Would I then just save it in a file and run "kubectl create -f file.yaml"? How do I then verify that it works? Am I supposed to see a new target in the Prometheus UI?

Many thanks for the help

camilb commented 6 years ago

Yes, your example should work and you should see a new target in Prometheus.

aviramartac commented 6 years ago

Awesome, thanks. No other step is needed after this? Because I just saw in the Prometheus Operator documentation they have another step to "include" ServiceMonitors: https://coreos.com/operators/prometheus/docs/latest/user-guides/getting-started.html#include-servicemonitors

camilb commented 6 years ago

By default, all ServiceMonitors labeled with k8s-app are configured. If you want to use a different label, then you should also add it to the Prometheus object.
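
For reference, that label selection lives in the `serviceMonitorSelector` field of the Prometheus custom resource. A sketch of what it might look like in this setup (the resource name and namespace here are assumptions; check the Prometheus object deployed by this project for the actual values):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus    # assumption: adjust to the name used in this repo
  namespace: monitoring
spec:
  # selects every ServiceMonitor that carries a k8s-app label, whatever its value
  serviceMonitorSelector:
    matchExpressions:
    - key: k8s-app
      operator: Exists
```

If you switch to a different label, add a matching expression (or `matchLabels` entry) here and re-apply the Prometheus object.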

aviramartac commented 6 years ago

Got it. Thanks for the help, love this project.

aviramartac commented 6 years ago

Sorry, I'm reopening because I still haven't gotten this to work. Maybe I'm doing something wrong? This is the service YAML (I only added the "k8s-app: myapp-backend" label):

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "myapp-backend-internal",
    "namespace": "myapp",
    "selfLink": "/api/v1/namespaces/myapp/services/myapp-backend-internal",
    "uid": "bc88f7a8-85ac-11e8-ae9e-02c44c140928",
    "resourceVersion": "8584852",
    "creationTimestamp": "2018-07-12T08:22:37Z",
    "labels": {
      "app": "myapp-backend",
      "env": "staging",
      "k8s-app": "myapp-backend"
    },
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"myapp-backend\",\"env\":\"staging\"},\"name\":\"myapp-backend-internal\",\"namespace\":\"myapp\"},\"spec\":{\"ports\":[{\"name\":\"http\",\"port\":80,\"targetPort\":9000}],\"selector\":{\"app\":\"myapp-backend\"}}}\n"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": 9000
      }
    ],
    "selector": {
      "app": "myapp-backend"
    },
    "clusterIP": "x.x.x.x",
    "type": "ClusterIP",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}

And this is the ServiceMonitor:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: myapp-backend-internal
  name: myapp-backend-internal
  namespace: myapp
spec:
  endpoints:
  - honorLabels: true
    interval: 15s
    path: /metrics
    port: http
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - myapp
  selector:
    matchLabels:
      k8s-app: myapp-backend-internal

When I apply the file, nothing gets added to the Prometheus targets. Is it possible that this issue is due to the service using a label selector different from the "k8s-app" one? That's the only reason I can think of. If so, how do I add a new serviceMonitorSelector to the Prometheus object?

Thanks for your help

camilb commented 6 years ago

Your labels are not consistent. You have k8s-app: myapp-backend in Service and k8s-app: myapp-backend-internal in ServiceMonitor.

camilb commented 6 years ago

  selector:
    matchLabels:
      k8s-app: myapp-backend-internal

should be

  selector:
    matchLabels:
      k8s-app: myapp-backend
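
Putting it together, the full corrected ServiceMonitor would look like this (assuming the rest of the manifest stays as posted above):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: myapp-backend
  name: myapp-backend-internal
  namespace: myapp
spec:
  endpoints:
  - honorLabels: true
    interval: 15s
    path: /metrics
    port: http
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - myapp
  selector:
    matchLabels:
      k8s-app: myapp-backend   # must match the label set on the Service
```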