prometheus-community / helm-charts

Prometheus community Helm charts

How to remove the prefix of the prometheus pod name on kube-prometheus-stack of helm chart? #1175

Closed · duanrex closed this issue 3 years ago

duanrex commented 3 years ago

When deploying prometheus, the pod name after the deployment is always automatically prefixed with "prometheus".

How can I remove this prefix?

For example, with fullnameOverride: "test-prometheus" the pod name becomes prometheus-test-prometheus.

Thanks

rmgpinto commented 3 years ago

I want to do the same. I've successfully removed the prefix for grafana, prometheus-node-exporter, and kube-state-metrics with the following:

grafana:
  fullnameOverride: "grafana"
kube-state-metrics:
  fullnameOverride: "kube-state-metrics"
prometheus-node-exporter:
  fullnameOverride: "prometheus-node-exporter"

But I'm still missing the prometheus, prometheus-operator and alertmanager pods, which are still named:

alertmanager-prometheus-kube-prometheus-alertmanager-0
prometheus-prometheus-kube-prometheus-prometheus-0
prometheus-kube-prometheus-operator-656fb5f5f4-p48zx

I've tried using the YAML below, but it didn't work.

alertmanager:
  alertmanagerSpec:
    fullnameOverride: alertmanager
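
That placement is the problem, as far as I can tell: alertmanagerSpec is passed straight through to the Alertmanager custom resource spec, which has no fullnameOverride field. The override the chart does honor is the top-level one of kube-prometheus-stack itself, and it only shortens the prefix. A minimal values.yaml sketch (the "pks" value is just illustrative):

fullnameOverride: "pks"    # shortens the chart fullname prefix only
alertmanager:
  alertmanagerSpec: {}     # no fullnameOverride field exists at this level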
torstenwalter commented 3 years ago

The easiest way to figure out how to do things like this is to look at the code. Let's start exploring it based on alertmanager.

You've been asking about kube-prometheus-stack. So we first need to check if it renders the template directly or if it uses a dependent chart.

https://github.com/prometheus-community/helm-charts/blob/2ed3afd270e9b7013e7ddf810e81fcfd5f57e123/charts/kube-prometheus-stack/Chart.yaml#L37-L49

Looks like it only has three dependencies and alertmanager is not one of them. So if alertmanager is not a transitive dependency of one of those three charts then the template must be somewhere else.

Here it is: https://github.com/prometheus-community/helm-charts/blob/2ed3afd270e9b7013e7ddf810e81fcfd5f57e123/charts/kube-prometheus-stack/templates/alertmanager/alertmanager.yaml#L1-L6

As you can see from the template, the name is {{ template "kube-prometheus-stack.fullname" . }}-alertmanager. The only thing you can influence is the prefix, not the -alertmanager suffix.

This is the code for kube-prometheus-stack.fullname: https://github.com/prometheus-community/helm-charts/blob/2ed3afd270e9b7013e7ddf810e81fcfd5f57e123/charts/kube-prometheus-stack/templates/_helpers.tpl#L7-L25
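
For readers who don't want to click through: the helper follows the conventional Helm fullname pattern, roughly the sketch below. This is a paraphrase, not the exact file; notably the real helper truncates to a shorter limit than the usual 63 characters to leave room for the prefixes and suffixes added around it.

{{/* Sketch of the conventional fullname pattern this helper follows.
     The linked file uses a shorter trunc limit than 63. */}}
{{- define "kube-prometheus-stack.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}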

rmgpinto commented 3 years ago

I tried values.fullnameOverride: pks, but the alertmanager pods are named alertmanager-pks-alertmanager-0. Is it possible to have the pods named alertmanager-0 and prometheus-0?
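
That matches how the Prometheus Operator names things: the chart only controls the name of the Alertmanager/Prometheus custom resources, and the operator then creates a StatefulSet called alertmanager-<CR name> (or prometheus-<CR name>), whose pods get the usual -0, -1 ordinal suffix. With the override above the chain looks like this:

fullnameOverride: "pks"                             (chart value)
Alertmanager CR:   pks-alertmanager                 (fullname + "-alertmanager", from the template above)
StatefulSet:       alertmanager-pks-alertmanager    (operator prepends "alertmanager-")
Pod:               alertmanager-pks-alertmanager-0  (StatefulSet ordinal)

So, as far as I can tell, a pod named exactly alertmanager-0 is not reachable through chart values alone; the operator prefix and the CR name are always part of the pod name.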

duanrex commented 3 years ago

I tried values.fullnameOverride, but the pod name is still automatically prefixed with "prometheus".

prometheus:
  fullnameOverride: "test"
  enabled: true
  serviceMonitor:
    selfMonitor: false
  prometheusSpec:

Are there other ways to deal with it? Or can the prefix of the pod name be customized?

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

stale[bot] commented 3 years ago

This issue is being automatically closed due to inactivity.

lusson-luo commented 2 years ago

I have the same question. I set fullnameOverride or nameOverride in kube-prometheus-stack, but the prometheus and alertmanager pod names are still automatically prefixed with "prometheus" after deployment.

I rendered kube-prometheus-stack to a YAML file with helm template; the Alertmanager and Prometheus resources there are not prefixed with "alertmanager"/"prometheus", so I suspect the prefix is added by the operator via the CRDs rather than by the chart.

How can I remove this prefix?

My Alertmanager and Prometheus template YAML:

---
# Source: observable/charts/kube-prometheus-stack/templates/alertmanager/alertmanager.yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: observable1-prometheus-alertmanager
  namespace: observable1
  labels:
    app: prometheus-alertmanager
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: observable1
    app.kubernetes.io/version: "39.9.0"
    app.kubernetes.io/part-of: prometheus
    chart: kube-prometheus-stack-39.9.0
    release: "observable1"
    heritage: "Helm"
spec:
  image: "alertmanager:v0.24.0"
  version: v0.24.0
  replicas: 1
  listenLocal: false
  serviceAccountName: observable1-prometheus-alertmanager
  externalUrl: "http://alertmanager.monitor.dev-009.devops-cloud.club/"
  paused: false
  logFormat: "logfmt"
  logLevel:  "info"
  retention: "120h"
  alertmanagerConfigSelector: {}
  alertmanagerConfigNamespaceSelector: {}
  resources:
    limits:
      cpu: 200m
      memory: 500Mi
    requests:
      cpu: 50m
      memory: 50Mi
  routePrefix: "/"
  securityContext:
    fsGroup: 2000
    runAsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: local-path
  tolerations:
    - effect: NoSchedule
      key: type
      operator: Equal
      value: infrastructure
  imagePullSecrets:
    - name: monitor-registry-secret
  portName: http-web
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: observable1-prometheus-prometheus
  namespace: observable1
  labels:
    app: prometheus-prometheus
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: observable1
    app.kubernetes.io/version: "39.9.0"
    app.kubernetes.io/part-of: prometheus
    chart: kube-prometheus-stack-39.9.0
    release: "observable1"
    heritage: "Helm"
...

My pod names: (screenshot)

lusson-luo commented 2 years ago

Quoting "Let's start exploring it based on alertmanager": that did not resolve it. If the Alertmanager resource name is prometheus-alertmanager, the pod name will be alertmanager-prometheus-alertmanager-0. Can you reopen this issue? @torstenwalter @duanrex

ForbiddenEra commented 9 months ago

This definitely should be more obvious/easily configurable.

Setting a release name with helm to just prometheus results in stuff like prometheus-prometheus-kube-prometheus-prometheus-0, which is a bit ridiculous .. like the pod name doesn't need the word prometheus in it FOUR times ;-)

$: kubectl get all -n prometheus-system
NAME
pod/alertmanager-prometheus-kube-prometheus-alertmanager-0
pod/prometheus-grafana-xxxxxxxxx-xxxxx
pod/prometheus-kube-prometheus-operator-xxxxxxxxxx-xxxxx
pod/prometheus-kube-state-metrics-xxxxxxxxxx-xxxxx
pod/prometheus-prometheus-kube-prometheus-prometheus-0
pod/prometheus-prometheus-node-exporter-xxxxx

NAME
service/alertmanager-operated
service/prometheus-grafana
service/prometheus-kube-prometheus-alertmanager
service/prometheus-kube-prometheus-operator
service/prometheus-kube-prometheus-prometheus
service/prometheus-kube-state-metrics
service/prometheus-operated
service/prometheus-prometheus-node-exporter

NAME
daemonset.apps/prometheus-prometheus-node-exporter

NAME
deployment.apps/prometheus-grafana
deployment.apps/prometheus-kube-prometheus-operator
deployment.apps/prometheus-kube-state-metrics

NAME
replicaset.apps/prometheus-grafana-xxxxxxxxx
replicaset.apps/prometheus-kube-prometheus-operator-xxxxxxxxxx
replicaset.apps/prometheus-kube-state-metrics-xxxxxxxxxx

NAME
statefulset.apps/alertmanager-prometheus-kube-prometheus-alertmanager
statefulset.apps/prometheus-prometheus-kube-prometheus-prometheus
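
For what it's worth, each of the four "prometheus" occurrences in prometheus-prometheus-kube-prometheus-prometheus-0 comes from a different place, which you can see by comparing with the Service names above:

prometheus-                     prefix the operator puts on every Prometheus StatefulSet/pod
prometheus-kube-prometheus      the chart fullname: release "prometheus" + chart name, truncated by the
                                fullname helper (which is why "-stack" is missing); visible in the Services above
-prometheus                     suffix the chart appends to name the Prometheus custom resource
-0                              StatefulSet pod ordinal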
henri9813 commented 5 months ago

Hello,

This issue is not fixed.

Does someone have a fix for this?

Being able to remove the redundant "prometheus" is clearly a needed feature in complex organizations.

For example, I just had to look twice at the name of the generated PVC:

prometheus-prometheus-stack-kube-prom-prometheus-db-prometheus-prometheus-stack-kube-prom-prometheus-0

Did we break the world record?

cjsmithuk commented 5 months ago

+1 on this.

It's absolutely insane to have so many prometheuses in here. Smells like someone is suffering from concatenitis.

Can we please have a solution for this? The last thing I need is PagerDuty waking me up at 3 in the bloody morning shouting "prometheus" at me 15 times in a row. This literally happened to me!

Also, can we please educate stale-bot about user pain.

ForbiddenEra commented 4 months ago

Smells like someone is suffering from concatenitis.

LMAO :)

Not to be confused with concactunitis, where one has concatenated a cactus to their assus.

ringerc commented 2 months ago

The closest I found was https://github.com/prometheus-community/helm-charts/blob/273b18741f077207ad175bfe893475287df6ee99/charts/kube-prometheus-stack/templates/_helpers.tpl#L37-L44

https://github.com/prometheus-community/helm-charts/blob/273b18741f077207ad175bfe893475287df6ee99/charts/kube-prometheus-stack/templates/_helpers.tpl#L51-L58

which in Helm values is:

## Setting to true produces cleaner resource names, but requires a data migration because the name of the persistent volume changes. Therefore this should only be set once on initial installation.
##
cleanPrometheusOperatorObjectNames: true

It's not perfect; it doesn't give control over the prometheus resource name directly. But it's an improvement.

I ended up post-processing with the kustomize Helm chart inflator and a rather painful set of kustomize patches, as I didn't want to fork the upstream chart.
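
For anyone wanting to follow the same route, here is a minimal sketch of that kind of setup, assuming kustomize's built-in Helm chart inflator (run with kustomize build --enable-helm). The patch target and the replacement name "main" are illustrative, not taken from the chart; in practice additional patches are likely needed for anything that references the old names (Services, ServiceMonitors, and so on), which is presumably the painful part.

# kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: kube-prometheus-stack
    repo: https://prometheus-community.github.io/helm-charts
    releaseName: prometheus
    namespace: prometheus-system
    includeCRDs: true
    valuesInline:
      cleanPrometheusOperatorObjectNames: true

patches:
  # Rename the Alertmanager CR; the operator then creates pods named
  # alertmanager-main-0 instead of alertmanager-<long-generated-name>-0.
  - target:
      group: monitoring.coreos.com
      version: v1
      kind: Alertmanager
    patch: |-
      - op: replace
        path: /metadata/name
        value: main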