The PrometheusSpec has the externalLabels field, which is just a YAML map of label-value pairs. Let me know if you have any other questions.
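For example, a minimal sketch of such an object (the label names and values here are purely illustrative):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
spec:
  # externalLabels is a plain map of label name -> value; these labels are
  # attached to any time series or alerts leaving this Prometheus server
  externalLabels:
    cluster: my-cluster
    environment: production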
@brancz Thanks for the prompt response, that's exactly what I'm looking for 👍
Glad I could help 🙂
Can you please tell me how I can pass my cluster name in this externalLabels field, if I have to pick it up this way: kubectl config current-context.
@deboshrestha you will need to do that somehow with your configuration management.
Thanks! I tried using configmaps here but running into an issue - so opened this one - https://github.com/coreos/prometheus-operator/issues/1325.
Hi! I am new to k8s and the prometheus operator too, and have problems understanding how to add external_labels for my prometheus-kube-prometheus deployment created by helm.
I am trying to edit the prometheus.yaml obtained by
kubectl get secret -n monitoring prometheus-kube-prometheus -ojson | jq -r '.data["prometheus.yaml"]' | base64 -d
but it seems impossible, and the wrong way, I think.
You need to just add the labels you want to the Prometheus object under .spec.externalLabels
Where can I get this endpoint? Via the API? Can you please show me how it's possible to add?
I don't know about helm, but in the Prometheus object you can just add the externalLabels field. You can list the objects:
kubectl -n <your-namespace> get prometheus
And then directly edit it:
kubectl -n <your-namespace> edit prometheus <your-prometheus-name>
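If you prefer a non-interactive approach, a merge patch should work as well (a sketch; the cluster label name and value are illustrative):

kubectl -n <your-namespace> patch prometheus <your-prometheus-name> --type merge -p '{"spec":{"externalLabels":{"cluster":"my-cluster"}}}'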
Yes, that's true for a simple Prometheus deployment, but I use the Prometheus Operator and kube-prometheus, where prometheus.yaml is generated dynamically by the service, so I can't find a way to add static general parameters such as external_labels.
The Prometheus Operator creates a new kind of object called Prometheus. It would probably be good to read the readme of the Prometheus Operator and just go through the getting started guide, to get a feeling for the concepts involved.
@reddare we are doing it the same way, i.e. kube-prometheus, where the manifests have the prometheus.yaml file. What we are doing is, as @brancz mentioned, adding the externalLabels field in that yaml. If you notice, when we deploy it there is a script being called. In there we do a simple sed which adds that line to the prometheus.yaml file based on the current-context of the k8s cluster.
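For illustration, such a deploy-script step might look roughly like this (a sketch only: the manifest path and the empty externalLabels placeholder are hypothetical, and GNU sed is assumed for the \n in the replacement):

# hypothetical: inject the current kube context as the cluster external label
# before applying the manifest
CLUSTER_NAME=$(kubectl config current-context)
sed -i "s|externalLabels: {}|externalLabels:\n    cluster: ${CLUSTER_NAME}|" manifests/prometheus-prometheus.yaml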
Hi guys, I am evaluating Cortex as a backend, so externalLabels is required to be something like:
externalLabels:
  cluster: ${CLUSTER_NAME}
  replica: ${REPLICA_NAME}
I was able to fetch the cluster name, but since I am using the helm operator with 2 replicas (HA) for Prometheus, I am not sure how the replica (or pod) name can be retrieved dynamically. Any suggestions, please?
Hi, try the following:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: <namespace>
spec:
  replicaExternalLabelName: __replica__
  prometheusExternalLabelName: cluster
  externalLabels:
    datacenter: <dc_name>
    cluster: <cluster_name>
  remoteWrite:
  - url: "<address>"
  ...
Wondering the same thing about retrieving the replica name dynamically; how did you manage this, by chance? My Prometheus just comes up with the error:
level=error ts=2020-10-09T01:49:34.491Z caller=main.go:285 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="parsing YAML file /etc/prometheus/config_out/prometheus.env.yaml: \"${CLUSTER_NAME}\" is not a valid label name"
I'm adding the env var to prom as:
spec:
  replicas: 2
  serviceAccountName: prometheus
  version: "v2.21.0"
  containers:
  - name: prometheus
    env:
    - name: CLUSTER_NAME
      value: testing
  podMetadata:
    labels:
      app: powerflex-prometheus
  externalLabels:
    env: edge
  prometheusExternalLabelName: ${CLUSTER_NAME}
  replicaExternalLabelName: ${CLUSTER_NAME}-replica
What am I doing wrong here?
@Kampe, I solved this by using replicaExternalLabelName: "__replica__":
https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L1670
This adds the replica key, using the pod name as its value.
For the cluster name I used:
externalLabels:
  cluster: cluster-name
https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L1666
Hope it helps.
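In kube-prometheus-stack values.yaml terms, that combination would look roughly like this (a sketch inferred from the values.yaml lines linked above, not copied from the chart):

prometheus:
  prometheusSpec:
    # name the per-replica external label __replica__; the operator fills in
    # the pod name as its value
    replicaExternalLabelName: "__replica__"
    externalLabels:
      cluster: cluster-name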
The correct way to do this, for all future comers: add env vars on the prometheus-config-reloader container and reference them with $(ENV) in your yaml. The replicaExternalLabelName should remain default in almost all cases, as it's handled by the operator, and the default label it adds on replicas, prometheus_replica, is valid for most federation tools, Thanos, etc.
Example:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    prometheus: prometheus
spec:
  replicas: 2
  serviceAccountName: prometheus
  version: "v2.21.0"
  containers:
  - name: prometheus-config-reloader
    env:
    - name: CLUSTER_NAME
      valueFrom:
        configMapKeyRef:
          name: configuration
          key: CLUSTER_NAME
  podMetadata:
    labels:
      app: prometheus
  externalLabels:
    cluster: $(CLUSTER_NAME)
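For context (our understanding of the mechanism, not official documentation): the config-reloader renders the generated configuration to /etc/prometheus/config_out/prometheus.env.yaml and expands $(VAR) references from its own environment while doing so, which is why the variable has to be set on that container rather than on the prometheus container itself.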
This should really be documented somewhere visible.
We just wanted to add that the example given above does not appear to work with v0.49.0; I think that a change in #3955 caused this, as well as a prior change to the container name.
A snippet of what works for us:
...
initContainers:
- name: init-config-reloader
  env:
  - name: CLUSTER_NAME
    valueFrom:
      configMapKeyRef:
        name: cluster-variables
        key: CLUSTER_NAME
externalLabels:
  cluster: $(CLUSTER_NAME)
...
Notice these changes:
containers -> initContainers
prometheus-config-reloader -> init-config-reloader
As a user, I'd give a huge +1 to @Kampe's suggestion that this should be documented somewhere, and ideally kept up to date as well.
Hi. I am new to using Prometheus external labels. In our development cluster we have around 20 namespaces, each with its own helm release files. We would like to grab the Slack channel names from the appropriate namespaces and use them for routing. The problem is that the namespace exposes the Slack channel name, but Prometheus is not able to pick it up from the namespaces, because the flux-system gotk_reconcile_metrics only carries labels for kind, name, and namespace. After seeing the comments here, I would like to know: is there a possibility to get the Slack channel labels from the namespaces using externalLabels in the PrometheusSpec, and to grab or overwrite the key from different namespaces?
Regarding the init-config-reloader snippet above: the env var set in init-config-reloader does not exist in the pod env. Why?
From what I see, you would like to load configuration options from environment variables in the externalLabels parameter. As of today this option is not available, and changing externalLabels should be handled externally (either by helm or jsonnet).
We will be open to revising our stance when https://prometheus.io/docs/prometheus/latest/feature_flags/#expand-environment-variables-in-external-labels is considered a stable solution.
Since the proposed "solution" is a dirty workaround, I don't see why it should be documented. Especially since it would be easier to just run prometheus with expand-external-labels (using the enableFeatures option in PrometheusSpec) and do a strategic merge patch of the env list in the prometheus container.
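A minimal sketch of what that suggestion could look like (our reading of it, with an illustrative variable name and value):

spec:
  # opt in to Prometheus's own ${VAR} expansion of external label values
  enableFeatures:
  - expand-external-labels
  # strategic merge patch of the env list in the prometheus container
  containers:
  - name: prometheus
    env:
    - name: CLUSTER_NAME
      value: my-cluster
  externalLabels:
    cluster: ${CLUSTER_NAME}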
Hi @paulfantom @Kampe, I am using the prometheus chart (not the prometheus operator) https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus. I am trying to apply external_labels under the global field, but it seems all the replicas are fetching the same external_labels, resulting in Thanos giving "dropping store, external labels are not unique" for one of the replicas. Is there any way I can add dynamic labels? Thanks
Alright y'all, here again with another workaround/solution if you're facing issues with external labels and environment variables:
As of 0.49.0, you'll need to not only add the environment variable to the config-reloader container like above, but ALSO supply the environment variable to the init-config-reloader container, or else you'll find your prometheus cannot start, with errors during initialization of the pod. As far as I know, you'll also need to set expand-external-labels as a feature as well.
Like so:
spec:
  initContainers:
  - name: init-config-reloader
    env:
    - name: CLUSTER_ID
      valueFrom:
        configMapKeyRef:
          name: cluster-information
          key: cluster-name
  containers:
  - name: config-reloader
    env:
    - name: CLUSTER_ID
      valueFrom:
        configMapKeyRef:
          name: cluster-information
          key: cluster-name
  enableFeatures:
  - expand-external-labels
  externalLabels:
    cluster_id: $(CLUSTER_ID)
Let me summarize. I have twenty-five Prometheus pods across five k8s clusters; every cluster has five Prometheus pods.
Set the replicaExternalLabelName: "__replica__" config in the chart's values.yaml, and use the prometheus feature flag --enable-feature=expand-external-labels.
The helm chart's values.yaml config is below:
externalLabels:
  prometheus_replica: $(POD_NAME)
enableFeatures:
- expand-external-labels
Friendly tip: the $(POD_NAME) variable is already defined in the helm chart's resource files (kube-prometheus-stack), so you don't need to define this variable again.
@Hello-Linux Hi, should those fields be added under prometheusSpec?
overriding 'cluster' is not possible
@bd-spl what do you mean by that?
I have the problem that configuring
externalLabels:
  cluster: "mycluster"
works for kind PrometheusAgent but not for kind Prometheus in my prometheus operator deployments.
Suppose we have two independent Kubernetes clusters, prod and pre-prod. For a given service which is deployed to both clusters, we deploy the same Prometheus rules. On pre-prod we want alerts to be routed to Slack, while on prod they should be routed to Slack and PagerDuty. To achieve this with vanilla Prometheus, it looks like we should apply an external label to the global Prometheus config and use it in the Alertmanager routing config, e.g. as sketched below.
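For illustration only, such a setup might look roughly like this (the label name, receiver names, and channel/key values are assumptions, not taken from the original):

# prometheus.yml (vanilla Prometheus), per cluster
global:
  external_labels:
    environment: prod   # pre-prod on the other cluster

# alertmanager.yml
route:
  receiver: slack
  routes:
  # prod alerts match here and page; everything else falls through to Slack
  - match:
      environment: prod
    receiver: slack-and-pagerduty
receivers:
- name: slack
  slack_configs:
  - channel: '#alerts'
- name: slack-and-pagerduty
  slack_configs:
  - channel: '#alerts'
  pagerduty_configs:
  - service_key: <key>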
How can we achieve this with the Prometheus Operator?