kontena / pharos-cluster

Pharos - The Kubernetes Distribution
https://k8spharos.dev/
Apache License 2.0

Add prometheus scraping to CoreDNS #613

Closed swade1987 closed 6 years ago

swade1987 commented 6 years ago

Having the ability to scrape metrics from CoreDNS (a core component of the cluster) by default is, in my opinion, a must-have.

You'd need to add the following to the deployed CoreDNS YAML to make this possible:

metadata:
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
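
Worth noting: the `prometheus.io/*` annotations are a convention, not a Kubernetes feature; they only take effect if the Prometheus server's scrape configuration honors them. A typical `kubernetes_sd_configs` relabel block for that looks roughly like the following (a sketch; the job name is illustrative and the exact config depends on how Prometheus is deployed in the cluster):

```yaml
- job_name: 'kubernetes-services'  # illustrative job name
  kubernetes_sd_configs:
  - role: service
  relabel_configs:
  # Keep only services annotated with prometheus.io/scrape: "true"
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
  # Rewrite the scrape address to use prometheus.io/port (e.g. 9153 for CoreDNS)
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
```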

Also, you may want to add preferred pod anti-affinity so that CoreDNS replicas are spread across nodes (see below):

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: k8s-app
            operator: In
            values:
            - coredns
        topologyKey: kubernetes.io/hostname

jnummelin commented 6 years ago

Testing with kube 1.11.3, core-dns seems to get those annotations automatically (from kubeadm):

$ kubectl get node
NAME              STATUS    ROLES     AGE       VERSION
pharos-master-0   Ready     master    2m        v1.11.3
pharos-worker-0   Ready     worker    2m        v1.11.3

$ kubectl get svc -n kube-system kube-dns -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  creationTimestamp: 2018-10-08T21:12:21Z
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: KubeDNS
  name: kube-dns
  namespace: kube-system
  resourceVersion: "228"
  selfLink: /api/v1/namespaces/kube-system/services/kube-dns
  uid: d8b03285-cb3e-11e8-8fa4-8e75cf48e20c
spec:
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

I guess we could run a merge patch for the kube-dns service just in case the annotations are missing.
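
Such a "just in case" merge patch could look like the one-liner below (a sketch; it assumes the current kubectl context points at the target cluster, and it is a no-op if kubeadm already set the annotations):

```shell
# Idempotently ensure the Prometheus scrape annotations exist
# on the kube-dns Service in kube-system.
kubectl -n kube-system patch service kube-dns --type merge \
  -p '{"metadata":{"annotations":{"prometheus.io/port":"9153","prometheus.io/scrape":"true"}}}'
```

A merge patch only sets the listed keys, so any existing annotations on the service are left untouched.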

jakolehm commented 6 years ago

Right, I think everything is already set (just tested with a fresh cluster). Let's reopen if we are missing something.