Works for me, on a kind cluster:
helm install kps prometheus-community/kube-prometheus-stack -n default --version=34.5.0
NAME: kps
LAST DEPLOYED: Mon Mar 28 18:51:04 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace default get pods -l "release=kps"
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
Regarding the error: you should check whether the CRDs were already installed. You may be trying to install into a cluster with a previously installed kube-prometheus-stack, so an old CRD version is still in use; Helm does not update CRDs for you. See: https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#install-a-crd-declaration-before-using-the-resource
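For example, a quick way to see whether prometheus-operator CRDs from an earlier install are still present (a sketch, assuming kubectl access to the target cluster):

kubectl get crd -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp | grep monitoring.coreos.com

The CREATED timestamps hint at how old the installed CRDs are.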
Thank you, @monotek, for pointing out to check whether the Prometheus CRDs were previously installed! I checked and sure enough found them:
alertmanagerconfigs.monitoring.coreos.com 2022-03-25T21:33:38Z
alertmanagers.monitoring.coreos.com 2021-08-30T18:48:10Z
prometheuses.monitoring.coreos.com 2021-08-30T18:48:26Z
prometheusrules.monitoring.coreos.com 2021-08-30T18:48:28Z
podmonitors.monitoring.coreos.com 2021-08-30T18:48:21Z
probes.monitoring.coreos.com 2021-08-30T18:48:23Z
servicemonitors.monitoring.coreos.com 2021-08-30T18:48:30Z
thanosrulers.monitoring.coreos.com 2021-08-30T18:48:33Z
Trying to use --skip-crds in my installation, but it results in the same error:
# helm install --skip-crds prometheus prometheus-community/kube-prometheus-stack -n monitoring
Error: Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Alertmanager.spec): unknown field "alertmanagerConfigNamespaceSelector" in com.coreos.monitoring.v1.Alertmanager.spec, ValidationError(Alertmanager.spec): unknown field "alertmanagerConfigSelector" in com.coreos.monitoring.v1.Alertmanager.spec]
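Note that --skip-crds only tells Helm not to install the CRDs bundled in the chart's crds/ directory; the rendered manifests are still validated against whatever CRD versions already exist in the cluster, so the old CRDs keep causing this error. One way to confirm the installed Alertmanager CRD lacks the new field (a sketch, assuming the default kubectl context):

kubectl get crd alertmanagers.monitoring.coreos.com -o yaml | grep -c alertmanagerConfigSelector

If this prints 0, the installed CRD predates the field and must be updated.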
Check the version of the CRDs. The dates indicate an old version. You need to update them manually; Helm will not do that for you.
I also encountered something similar. The install failed (it worked before with the new ServiceMonitors). After I edited additionalServiceMonitors in the helmfile and uninstalled the Helm chart, it is stuck in the previous failed state :(
I have no idea how to solve it; I have already spent hours on it.
FAILED RELEASES:
NAME
kube-prometheus-stack
in ./helmfile.yaml: failed processing release kube-prometheus-stack: command "/usr/local/bin/helm" exited with non-zero status:
PATH: /usr/local/bin/helm
ARGS:
0: helm (4 bytes)
1: upgrade (7 bytes)
2: --install (9 bytes)
3: --reset-values (14 bytes)
4: kube-prometheus-stack (21 bytes)
5: prometheus-community/kube-prometheus-stack (42 bytes)
6: --create-namespace (18 bytes)
7: --namespace (11 bytes)
8: monitoring (10 bytes)
9: --values (8 bytes)
10: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile1552755465/monitoring-kube-prometheus-stack-values-bdd7bb589 (117 bytes)
11: --values (8 bytes)
12: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile294851738/monitoring-kube-prometheus-stack-values-58c9fd46b9 (117 bytes)
13: --values (8 bytes)
14: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile1573017694/monitoring-kube-prometheus-stack-values-7556779458 (118 bytes)
15: --values (8 bytes)
16: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile1166246359/monitoring-kube-prometheus-stack-values-68c8448f8 (117 bytes)
17: --values (8 bytes)
18: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile1244192024/monitoring-kube-prometheus-stack-values-65654d64cd (118 bytes)
19: --values (8 bytes)
20: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile3832102784/monitoring-kube-prometheus-stack-values-55c899885d (118 bytes)
21: --values (8 bytes)
22: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile2931753399/monitoring-kube-prometheus-stack-values-5f6798b8cc (118 bytes)
23: --values (8 bytes)
24: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile1049642655/monitoring-kube-prometheus-stack-values-768b56d944 (118 bytes)
25: --values (8 bytes)
26: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile60257564/monitoring-kube-prometheus-stack-values-75b5578ff (115 bytes)
27: --values (8 bytes)
28: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile1897511646/monitoring-kube-prometheus-stack-values-7564ccd845 (118 bytes)
29: --values (8 bytes)
30: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile349819710/monitoring-kube-prometheus-stack-values-7db7df679 (116 bytes)
31: --values (8 bytes)
32: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile3247665417/monitoring-kube-prometheus-stack-values-85f67655c6 (118 bytes)
33: --values (8 bytes)
34: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile3581621848/monitoring-kube-prometheus-stack-values-566d97b59c (118 bytes)
35: --values (8 bytes)
36: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile2503916089/monitoring-kube-prometheus-stack-values-67bb47d5d (117 bytes)
37: --values (8 bytes)
38: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile1132192282/monitoring-kube-prometheus-stack-values-976f95d6f (117 bytes)
39: --values (8 bytes)
40: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile1032942234/monitoring-kube-prometheus-stack-values-cdb69dd76 (117 bytes)
41: --values (8 bytes)
42: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile3306848312/monitoring-kube-prometheus-stack-values-5c776bfcb5 (118 bytes)
43: --values (8 bytes)
44: /var/folders/k3/r6c8pgmd2vj6w5rddvzjv8t00000gn/T/helmfile64536416/monitoring-kube-prometheus-stack-values-78cbc9fffd (116 bytes)
45: --history-max (13 bytes)
46: 10 (2 bytes)
ERROR: exit status 1
EXIT STATUS 1
STDERR: Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(ServiceMonitor.spec.endpoints[0].relabelings[0]): unknown field "source_labels" in com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings, ValidationError(ServiceMonitor.spec.endpoints[0].relabelings[0]): unknown field "target_label" in com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings, ValidationError(ServiceMonitor.spec.endpoints[1].relabelings[0]): unknown field "source_labels" in com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings, ValidationError(ServiceMonitor.spec.endpoints[1].relabelings[0]): unknown field "target_label" in com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings, ValidationError(ServiceMonitor.spec.endpoints[2].relabelings[0]): unknown field "source_labels" in com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings, ValidationError(ServiceMonitor.spec.endpoints[2].relabelings[0]): unknown field "target_label" in com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings]
COMBINED OUTPUT: Release "kube-prometheus-stack" does not exist. Installing it now. Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(ServiceMonitor.spec.endpoints[0].relabelings[0]): unknown field "source_labels" in com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings, ValidationError(ServiceMonitor.spec.endpoints[0].relabelings[0]): unknown field "target_label" in com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings, ValidationError(ServiceMonitor.spec.endpoints[1].relabelings[0]): unknown field "source_labels" in com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings, ValidationError(ServiceMonitor.spec.endpoints[1].relabelings[0]): unknown field "target_label" in com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings, ValidationError(ServiceMonitor.spec.endpoints[2].relabelings[0]): unknown field "source_labels" in com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings, ValidationError(ServiceMonitor.spec.endpoints[2].relabelings[0]): unknown field "target_label" in com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings]
This is the values section of the helmfile:
values:
- namespaceOverride: "monitoring"
- defaultRules:
create: true
rules:
alertmanager: true
etcd: true
configReloaders: true
general: true
k8s: true
kubeApiserver: true
kubeApiserverAvailability: true
kubeApiserverSlos: true
kubelet: true
kubeProxy: true
kubePrometheusGeneral: true
kubePrometheusNodeRecording: true
kubernetesApps: true
kubernetesResources: true
kubernetesStorage: true
kubernetesSystem: true
kubeScheduler: true
kubeStateMetrics: true
network: true
node: true
nodeExporterAlerting: true
nodeExporterRecording: true
prometheus: true
prometheusOperator: true
- prometheus:
enabled: true
serviceAccount:
create: true
prometheusSpec:
disableCompaction: false
listenLocal: false
enableAdminAPI: false
image:
repository: quay.io/prometheus/prometheus
tag: v2.34.0
sha: ""
ruleSelectorNilUsesHelmValues: true
serviceMonitorSelectorNilUsesHelmValues: true
podMonitorSelectorNilUsesHelmValues: true
probeSelectorNilUsesHelmValues: true
ignoreNamespaceSelectors: true
replicas: 1
logLevel: info
logFormat: logfmt
routePrefix: /
podMetadata: {}
podAntiAffinityTopologyKey: kubernetes.io/hostname
securityContext:
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
fsGroup: 2000
portName: "http-web"
additionalServiceMonitors:
- name: testapp-servicemonitor
jobLabel: testapp-metrics
selector:
matchLabels:
app.kubernetes.io/name: testapp-frontend
namespaceSelector:
matchNames:
- testapp
endpoints:
- port: metrics
- targetPort: "3000"
bearerTokenFile: "/var/run/secrets/kubernetes.io/serviceaccount/token"
interval: 15s
path: /metrics
scheme: http
targetLabels: []
- name: argo-rollouts-servicemonitor
jobLabel: rollouts-metrics
selector:
matchLabels:
app.kubernetes.io/name: argo-rollouts
namespaceSelector:
matchNames:
- argo-rollouts
endpoints:
- port: metrics
- targetPort: "8090"
bearerTokenFile: "/var/run/secrets/kubernetes.io/serviceaccount/token"
interval: 15s
path: /metrics
scheme: http
targetLabels: []
- name: argocd-repo-servicemonitor
jobLabel: argocd-repo-metrics
selector:
matchLabels:
app.kubernetes.io/name: argocd-repo-server
namespaceSelector:
matchNames:
- argocd
endpoints:
- port: metrics
- targetPort: "8081"
bearerTokenFile: "/var/run/secrets/kubernetes.io/serviceaccount/token"
interval: 15s
path: /metrics
scheme: http
targetLabels: []
I can probably point to some formal issues. The relabelings config is not shown, so I am judging from the errors.
- com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings.source_labels does not exist in the spec. Instead, there is com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings.sourceLabels[]. Similarly, source_label is also invalid.
- com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings.target_label is not included in the spec. Instead, there is com.coreos.monitoring.v1.ServiceMonitor.spec.endpoints.relabelings.targetLabel.
- It may just be the formatting of the snippet, but I see prometheus.prometheusSpec.additionalServiceMonitors[] where it should be prometheus.additionalServiceMonitors[].
Ref. prometheus-operator API
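For reference, a minimal relabeling entry using the operator's camelCase field names could look like this (a sketch; the source label and target label are only illustrative):

relabelings:
  - sourceLabels: [__meta_kubernetes_namespace]
    action: replace
    targetLabel: namespace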
@angelwancy & @ikirianov just do as the readme says: https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md#from-33x-to-34x
Run these commands to update the CRDs before applying the upgrade:
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.55.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.55.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.55.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.55.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.55.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.55.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.55.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.55.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
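Equivalently, as a short loop (a sketch, assuming bash and the same v0.55.0 tag as above):

for crd in alertmanagerconfigs alertmanagers podmonitors probes prometheuses prometheusrules servicemonitors thanosrulers; do
  kubectl apply --server-side -f "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.55.0/example/prometheus-operator-crd/monitoring.coreos.com_${crd}.yaml"
done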
Thank you for the tips. However, it is strange: I modified the relabelings and it still prints the same error. There are no target_label or source_labels keys under relabelings any more; instead there are targetLabel and sourceLabels.
values:
- namespaceOverride: "monitoring"
- defaultRules:
create: true
rules:
alertmanager: true
etcd: true
configReloaders: true
general: true
k8s: true
kubeApiserver: true
kubeApiserverAvailability: true
kubeApiserverSlos: true
kubelet: true
kubeProxy: true
kubePrometheusGeneral: true
kubePrometheusNodeRecording: true
kubernetesApps: true
kubernetesResources: true
kubernetesStorage: true
kubernetesSystem: true
kubeScheduler: true
kubeStateMetrics: true
network: true
node: true
nodeExporterAlerting: true
nodeExporterRecording: true
prometheus: true
prometheusOperator: true
- prometheus:
enabled: true
serviceAccount:
create: true
additionalServiceMonitors:
- name: testapp-servicemonitor
jobLabel: testapp-metrics
selector:
matchLabels:
app.kubernetes.io/name: testapp-frontend
namespaceSelector:
matchNames:
- testapp
endpoints:
- port: metrics
- targetPort: "3000"
bearerTokenFile: "/var/run/secrets/kubernetes.io/serviceaccount/token"
interval: 15s
path: /metrics
scheme: http
relabelings:
- sourceLabels: [__meta_kubernetes_pod_container_port_name]
action: keep
regex: '.*-testapp-prom'
- action: labeldrop
regex: "__meta_kubernetes_pod_label_(.+)"
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: namespace
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: pod_name
- name: argo-rollouts-servicemonitor
jobLabel: rollouts-metrics
selector:
matchLabels:
app.kubernetes.io/name: argo-rollouts
namespaceSelector:
matchNames:
- argo-rollouts
endpoints:
- port: metrics
- targetPort: "8090"
bearerTokenFile: "/var/run/secrets/kubernetes.io/serviceaccount/token"
interval: 15s
path: /metrics
scheme: http
relabelings:
- sourceLabels: [__meta_kubernetes_pod_container_port_name]
action: keep
regex: '.*-rollouts-prom'
- action: labeldrop
regex: "__meta_kubernetes_pod_label_(.+)"
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: namespace
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: pod_name
- name: argocd-repo-servicemonitor
jobLabel: argocd-repo-metrics
selector:
matchLabels:
app.kubernetes.io/name: argocd-repo-server
namespaceSelector:
matchNames:
- argocd
endpoints:
- port: metrics
- targetPort: "8081"
bearerTokenFile: "/var/run/secrets/kubernetes.io/serviceaccount/token"
interval: 15s
path: /metrics
scheme: http
relabelings:
- sourceLabels: [__meta_kubernetes_pod_container_port_name]
action: keep
regex: '.*-argocd-repo-prom'
- action: labeldrop
regex: "__meta_kubernetes_pod_label_(.+)"
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: namespace
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: pod_name
prometheusSpec:
disableCompaction: false
listenLocal: false
enableAdminAPI: false
image:
repository: quay.io/prometheus/prometheus
tag: v2.34.0
sha: ""
ruleSelectorNilUsesHelmValues: true
serviceMonitorSelectorNilUsesHelmValues: true
podMonitorSelectorNilUsesHelmValues: true
probeSelectorNilUsesHelmValues: true
ignoreNamespaceSelectors: true
replicas: 1
logLevel: info
logFormat: logfmt
routePrefix: /
podMetadata: {}
podAntiAffinityTopologyKey: kubernetes.io/hostname
storageSpec:
# volumeClaimTemplate:
# spec:
# accessModes: ["ReadWriteOnce"]
# resources:
# requests:
# storage: 50Gi
securityContext:
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
fsGroup: 2000
portName: "http-web"
I also tried this, without luck:
values:
- namespaceOverride: "monitoring"
- defaultRules:
create: true
rules:
alertmanager: true
etcd: true
configReloaders: true
general: true
k8s: true
kubeApiserver: true
kubeApiserverAvailability: true
kubeApiserverSlos: true
kubelet: true
kubeProxy: true
kubePrometheusGeneral: true
kubePrometheusNodeRecording: true
kubernetesApps: true
kubernetesResources: true
kubernetesStorage: true
kubernetesSystem: true
kubeScheduler: true
kubeStateMetrics: true
network: true
node: true
nodeExporterAlerting: true
nodeExporterRecording: true
prometheus: true
prometheusOperator: true
- prometheus:
enabled: true
serviceAccount:
create: true
prometheusSpec:
disableCompaction: false
listenLocal: false
enableAdminAPI: false
image:
repository: quay.io/prometheus/prometheus
tag: v2.34.0
sha: ""
ruleSelectorNilUsesHelmValues: true
serviceMonitorSelectorNilUsesHelmValues: true
podMonitorSelectorNilUsesHelmValues: true
probeSelectorNilUsesHelmValues: true
ignoreNamespaceSelectors: true
replicas: 1
logLevel: info
logFormat: logfmt
routePrefix: /
podMetadata: {}
podAntiAffinityTopologyKey: kubernetes.io/hostname
storageSpec:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 50Gi
securityContext:
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
fsGroup: 2000
portName: "http-web"
additionalScrapeConfigs:
- job_name: testapp-metrics
scrape_interval: 15s
kubernetes_sd_configs:
- role: pod
namespaces:
names:
- testapp
relabel_configs:
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod
- source_labels: [__address__]
action: replace
regex: ([^:]+)(?::\d+)?
replacement: ${1}:3000
target_label: __address__
- source_labels: [__meta_kubernetes_pod_label_app]
action: keep
regex: testapp-frontend
- job_name: argo-rollouts-metrics
scrape_interval: 15s
kubernetes_sd_configs:
- role: pod
namespaces:
names:
- argo-rollouts
relabel_configs:
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod
- source_labels: [__address__]
action: replace
regex: ([^:]+)(?::\d+)?
replacement: ${1}:8090
target_label: __address__
- source_labels: [__meta_kubernetes_pod_label_app]
action: keep
regex: argo-rollouts
- job_name: argocd-repo-metrics
scrape_interval: 15s
kubernetes_sd_configs:
- role: pod
namespaces:
names:
- argocd
relabel_configs:
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod
- source_labels: [__address__]
action: replace
regex: ([^:]+)(?::\d+)?
replacement: ${1}:8081
target_label: __address__
- source_labels: [__meta_kubernetes_pod_label_app]
action: keep
regex: argocd-repo-server
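An editorial aside on the snake_case keys above: they are correct in this context. additionalScrapeConfigs is passed through to Prometheus as raw scrape configuration, which uses source_labels and target_label, whereas ServiceMonitor objects go through the operator's camelCase API (sourceLabels, targetLabel). For contrast:

# raw Prometheus scrape config (additionalScrapeConfigs): snake_case
- source_labels: [__meta_kubernetes_namespace]
  target_label: namespace
# operator ServiceMonitor relabelings: camelCase
- sourceLabels: [__meta_kubernetes_namespace]
  targetLabel: namespace

So this variant should not trip the earlier ValidationError by itself.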
@angelwancy Relying on the values file snippet as rendered above:
- relabelings is a child object of endpoints and has to be indented further to the right.
- endpoints is a list, as shown. However, there are two elements in the list which you probably intend as one element (the hyphen at targetPort should not be there).
@angelwancy Sorry, I forgot: according to the spec, targetPort and port are mutually exclusive, i.e. only one of them can be specified.
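Putting those points together, a corrected endpoint entry for the first monitor might look like this (a sketch based on the poster's own snippet, keeping port and dropping targetPort):

endpoints:
  - port: metrics
    bearerTokenFile: "/var/run/secrets/kubernetes.io/serviceaccount/token"
    interval: 15s
    path: /metrics
    scheme: http
    relabelings:
      - sourceLabels: [__meta_kubernetes_namespace]
        action: replace
        targetLabel: namespace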
Same issue with outdated CRDs here. Please tell me how to update them manually.
It's in the readme.
Thanks a lot! I updated the CRDs manually according to the commands under "From 33.x to 34.x".
However, a new error appears when I install prometheus with helm:
helm install prometheus prometheus-community/kube-prometheus-stack --version "34.5.0"
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource Alertmanager "prometheus-kube-prometheus-alertmanager" in namespace "default": the server was unable to return a response in the time allotted, but may still be processing the request (get alertmanagers.monitoring.coreos.com prometheus-kube-prometheus-alertmanager)
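When an install aborts with "rendered manifests contain a resource that already exists", a custom resource left over from a previous release is usually still present. One way to inspect it (a sketch; only delete the resource if the old release is really gone, since deletion is destructive):

kubectl get alertmanagers.monitoring.coreos.com -A
kubectl delete alertmanagers.monitoring.coreos.com prometheus-kube-prometheus-alertmanager -n default

In this case, though, the "unable to return a response in the time allotted" part points at an unresponsive API server rather than the chart itself.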
After troubleshooting the whole day, I found that this "no response" error was caused by other services in my cluster. With that resolved, Helm works smoothly.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp 127.0.0.1:8080: connect: connection refused
I am able to install the prometheus chart but I am facing this issue with the prometheus-stack chart. I am installing it on MicroK8s. It did work for me before, on a different machine, a month ago when I was testing.
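That localhost:8080 fallback usually means helm cannot find a kubeconfig for the cluster. On MicroK8s, one way to point helm at the cluster (a sketch, assuming the microk8s CLI is on the PATH):

microk8s config > ~/.kube/config

Alternatively, if the helm3 addon is enabled, run the bundled client as microk8s helm3 install ... so it talks to the MicroK8s API server directly.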
Describe the bug
When running the helm install command to install the chart from the repo, the following error is displayed.
command:
# helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
error: Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Alertmanager.spec): unknown field "alertmanagerConfigNamespaceSelector" in com.coreos.monitoring.v1.Alertmanager.spec, ValidationError(Alertmanager.spec): unknown field "alertmanagerConfigSelector" in com.coreos.monitoring.v1.Alertmanager.spec]
What's your helm version?
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
What's your kubectl version?
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:04:18Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Which chart?
kube-prometheus-stack
What's the chart version?
34.5.0
What happened?
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Alertmanager.spec): unknown field "alertmanagerConfigNamespaceSelector" in com.coreos.monitoring.v1.Alertmanager.spec, ValidationError(Alertmanager.spec): unknown field "alertmanagerConfigSelector" in com.coreos.monitoring.v1.Alertmanager.spec]
What you expected to happen?
Expected the chart to install successfully
How to reproduce it?
run command: helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
Enter the changed values of values.yaml?
No response
Enter the command that you execute and failing/misfunctioning.
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
Anything else we need to know?
No response