Closed — jakubdyszkiewicz closed this issue 2 years ago
The problem is that those annotations are placed on Pod and there is support only for 1 value which means that scraping metrics from both a dataplane and an application is not possible.
Could you please attach a link to a resource where this is documented? Or maybe to the Prometheus source code?
Here is the issue https://github.com/prometheus/prometheus/issues/3756
The second problem is that those annotations are placed by the injector, therefore after turning metrics on you have to redeploy the pods.
It's possible to edit labels and annotations on Pods without a restart, e.g. with `kubectl label` and `kubectl annotate`. We could use that to bypass this limitation.
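For example, the scrape annotations could be set on a running Pod like this (a sketch: the annotation names follow the `prometheus.metrics.kuma.io` scheme proposed below, and `demo-app-5f6d8` is a hypothetical Pod name):

```shell
# Set scrape annotations on an existing Pod without restarting it.
# --overwrite allows changing values that the injector already set.
kubectl annotate pod demo-app-5f6d8 \
  prometheus.metrics.kuma.io/scrape=true \
  prometheus.metrics.kuma.io/port=5670 \
  prometheus.metrics.kuma.io/path=/metrics \
  --overwrite
```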
Kubernetes SD
Cons:
- There is no information about mesh since Pod does not know about it
It looks solvable
Kubernetes SD
Cons:
- You have to redeploy Pod to apply it (it relies on extra annotations that injector would add)
It looks solvable (see comment above)
In general, I think that the `kuma-prometheus-sd`-based solution is the best fit on Kubernetes, and we should recommend it as the default approach. We can keep the `prometheus.io` annotations and improve them once there is a demand for that.

Kuma Prometheus SD
Pros:
- Information about the mesh
Notice that for k8s users it's important to have a `namespace` label and, probably, to have `name` cleaned up from the `.namespace` suffix (or maybe a separate label like `pod_name`).
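A hypothetical relabeling rule along these lines (a sketch only — it assumes the discovered target carries a `name` label in the `pod.namespace` form; the label names are assumptions):

```yaml
relabel_configs:
# Split a "pod.namespace" style name into separate labels.
- source_labels: [name]
  regex: '(.+)\.(.+)'
  target_label: pod_name
  replacement: '$1'
- source_labels: [name]
  regex: '(.+)\.(.+)'
  target_label: namespace
  replacement: '$2'
```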
I managed to get `kuma-prometheus-sd` working with a `Prometheus` instance from Prometheus Operator. Here is the example YAML for a `Prometheus` instance. Please note that depending on where you deploy it, you may need to create the correct RBAC bindings and service account.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    app: prom-operator-prometheus
    chart: prometheus-operator-8.5.1
  name: prometheus-kuma
  namespace: kuma-metrics
spec:
  additionalScrapeConfigs:
    key: additional-scrape-configs.yaml
    name: kuma-scrape-confg
  alerting:
    alertmanagers:
    - name: alertmanager
      namespace: monitoring
      pathPrefix: /
      port: web
  baseImage: prometheus/prometheus
  version: v2.18.2
  enableAdminAPI: false
  externalUrl: http://< k8s service address >.kuma-metrics:9090
  listenLocal: false
  logFormat: json
  logLevel: info
  nodeSelector:
    kubernetes.io/os: linux
  paused: false
  podMonitorNamespaceSelector: {}
  podMonitorSelector:
    matchLabels:
      kuma-pod-monitor: enabled
  portName: web
  replicas: 1
  resources:
    requests:
      cpu: 300m
      memory: 400Mi
  retention: 7d
  retentionSize: 90GB
  routePrefix: /
  ruleNamespaceSelector: {}
  ruleSelector:
    matchLabels:
      kuma-rules: enabled
  securityContext:
    fsGroup: 2000
    runAsNonRoot: false
    runAsUser: 1000
  serviceAccountName: prometheus
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector:
    matchLabels:
      kuma-service-monitor: enabled
  storage:
    volumeClaimTemplate:
      selector: {}
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 100Gi
        storageClassName: default
  podMetadata:
    labels:
      kuma.io/dataplane-metrics: enabled
    annotations:
      kuma.io/mesh: default
      kuma.io/sidecar-injection: enabled
      kuma.io/gateway: disabled
      kuma.io/virtual-probes: disabled # Prometheus has named ports so we have to disable this for now
      kuma.io/direct-access-services: "*"
      traffic.kuma.io/exclude-inbound-ports: "9090" # Prometheus readiness
      traffic.kuma.io/exclude-outbound-ports: "443,9093" # k8s API server, Alertmanager
  containers:
  - name: kuma-prometheus-sd
    image: kong-docker-kuma-docker.bintray.io/kuma-prometheus-sd:1.0.0
    imagePullPolicy: IfNotPresent
    args:
    - run
    - --name=kuma-prometheus-sd
    - --cp-address=grpc://kuma-control-plane.kuma-system:5676
    - --output-file=/etc/prometheus/config_out/kuma.file_sd.json
    resources:
      limits:
        cpu: 100m
        memory: 25Mi
    volumeMounts:
    - mountPath: /etc/prometheus/config_out
      name: config-out
```
Closing as it's deprecated.
Problem
Our current implementation of the Prometheus integration on K8S relies on adding `prometheus.io/scrape`, `prometheus.io/path` and `prometheus.io/port` annotations. The problem is that those annotations are placed on the Pod and only one value is supported, which means that scraping metrics from both a dataplane and an application is not possible. The second problem is that those annotations are placed by the injector, therefore after turning metrics on you have to redeploy the pods.
Explored Solutions
Prometheus Operator
Prometheus Operator exposes CRDs which enable the user to configure additional scraping targets. I tried to use `PodMonitor`, but it seems to work only with labels, not with annotations. I'm not sure if adding labels via the injector is a good idea. It also requires coordinating the port/path configuration between the mesh and the operator. Even so, this solution would only work for Prometheus Operator users; we need a broader approach.
Kubernetes SD
Prometheus ships with Kubernetes SD, which lets you define which pods to scrape metrics from. Assuming that the injector would add the following annotations: `prometheus.metrics.kuma.io/port`, `prometheus.metrics.kuma.io/path`, `prometheus.metrics.kuma.io/scrape`, we can apply the following config.

Pros:
Cons:
Note: you can't see all labels from the pod by default, but those can be mapped using `action: labelmap`.
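A minimal sketch of such a scrape config, assuming the `prometheus.metrics.kuma.io/*` annotations above (the job name is illustrative; Prometheus sanitizes the annotation names by replacing non-alphanumeric characters with underscores):

```yaml
scrape_configs:
- job_name: kuma-dataplanes
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Only keep pods that the injector opted in to scraping.
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_metrics_kuma_io_scrape]
    action: keep
    regex: "true"
  # Override the metrics path from the annotation, if present.
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_metrics_kuma_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  # Rewrite the target address to use the annotated port.
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_metrics_kuma_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  # Map all pod labels onto the scraped target.
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
```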
Kuma Prometheus SD
To integrate Prometheus on Universal environments we introduced the kuma-prometheus-sd binary, which generates a file that is later consumed by Prometheus' `file_sd` mechanism. This can also be used on K8S. You have to deploy this binary next to the Prometheus server.
To do this you have to modify the Prometheus server deployment:
- add a new container
- add a volume that is used by both containers
- mount the new volume into the existing container

Then you need to change the Prometheus config.
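The config change could look roughly like this (a sketch: the file path matches the `--output-file` flag used in the example above, the job name and refresh interval are assumptions):

```yaml
scrape_configs:
- job_name: kuma-dataplanes
  file_sd_configs:
  - files:
    - /etc/prometheus/config_out/kuma.file_sd.json
    # Re-read the file periodically as kuma-prometheus-sd rewrites it.
    refresh_interval: 30s
```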
Pros:
Cons:
Summary
Using Kuma Prometheus SD seems to be the best option from the end user perspective (enabling/disabling metrics, querying by mesh). The question is: is there a way to improve the UX of configuring Prometheus this way, or is it good enough with proper instructions?