prometheus-community / helm-charts

Prometheus community Helm charts
Apache License 2.0

[prometheus-kube-stack] latest chart Installation issue hostNetwork #2753

Closed (wittymindstech closed 1 year ago)

wittymindstech commented 1 year ago

Describe the bug (a clear and concise description of what the bug is)

I am using this command to install the latest version:

helm install prom-issue prometheus-community/kube-prometheus-stack -f gauravdoc/values-kps.yml -n observability --version=42.1.0

But I am getting the error below:

Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Prometheus.spec): unknown field "hostNetwork" in com.coreos.monitoring.v1.Prometheus.spec

Please help.

What's your helm version?

version.BuildInfo{Version:"v3.8.0", GitCommit:"d14138609b01886f544b2025f5000351c9eb092e", GitTreeState:"clean", GoVersion:"go1.17.5"}

What's your kubectl version?

Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.9", GitCommit:"9dd794e454ac32d97cde41ae10be801ae98f75df", GitTreeState:"clean", BuildDate:"2021-03-18T01:00:06Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

Which chart?

kube-prometheus-stack

What's the chart version?

42.1.0

What happened?

I am using the commands below to install the latest version:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prom-issue prometheus-community/kube-prometheus-stack -f gauravdoc/values-kps.yml -n observability --version=42.1.0

But I am getting the error below:

Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Prometheus.spec): unknown field "hostNetwork" in com.coreos.monitoring.v1.Prometheus.spec

What did you expect to happen?

I expected latest kube-prometheus-stack to be installed in my cluster.

How to reproduce it?

Run:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prom-issue prometheus-community/kube-prometheus-stack -f gauravdoc/values-kps.yml -n observability --version=42.1.0

Enter the changed values of values.yaml?

NONE

Enter the command that you executed that is failing/malfunctioning.

helm install prom-issue prometheus-community/kube-prometheus-stack -f gauravdoc/values-kps.yml -n observability --version=42.1.0

Anything else we need to know?

No response

sourabhgupta385 commented 1 year ago

@wittymindstech Did you install the CRDs first? kubectl apply -f crds/
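
In case it is unclear where that crds/ directory comes from: a minimal sketch, assuming you pull and untar the chart locally first (the kube-prometheus-stack directory name comes from the untarred chart):

helm pull prometheus-community/kube-prometheus-stack --version 42.1.0 --untar
kubectl apply -f kube-prometheus-stack/crds/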

joris-weijters commented 1 year ago

I've got a similar error, though not during installation but during a helm diff:

Error: Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Prometheus.spec): unknown field "hostNetwork" in com.coreos.monitoring.v1.Prometheus.spec

I also get this error with versions 42.0.3 and 41.9.1.

wittymindstech commented 1 year ago

@sourabhgupta385 I was not applying this earlier. Anyway, I applied it and got the warning and error below:

Warning: resource customresourcedefinitions/thanosrulers.monitoring.coreos.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.

customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com configured

The CustomResourceDefinition "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

zeritti commented 1 year ago

> @sourabhgupta385 I was not applying this earlier. Anyway, I applied it and got the warning and error below:
>
> Warning: resource customresourcedefinitions/thanosrulers.monitoring.coreos.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
>
> customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com configured
>
> The CustomResourceDefinition "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

When upgrading the CRDs, one should prefer kubectl apply --server-side; see also the chart's upgrade notes.
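
As a minimal sketch, assuming the chart's crds/ directory is available locally (e.g. from helm pull --untar):

kubectl apply --server-side -f kube-prometheus-stack/crds/
# If another field manager owns conflicting fields, take ownership explicitly:
kubectl apply --server-side --force-conflicts -f kube-prometheus-stack/crds/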

wittymindstech commented 1 year ago

@zeritti Yes, it works. Thanks! I also had to enforce it using --force-conflicts, as I was getting some conflicts. Thank you @sourabhgupta385 too.

sourabhgupta385 commented 1 year ago

kubectl replace -f crds/ is also an option for anyone looking for a solution in this thread.

Thanks for the info @zeritti, and welcome @wittymindstech.

witchbutter commented 1 year ago

I'm misunderstanding the workaround here. Is the kube-prometheus-stack chart intended to install the CRDs and it just happens to be failing in this case, or are we supposed to install the CRDs separately as the standard?

zeritti commented 1 year ago

> I'm misunderstanding the workaround here. Is the kube-prometheus-stack chart intended to install the CRDs and it just happens to be failing in this case, or are we supposed to install the CRDs separately as the standard?

I do not think that installing the CRDs has changed: Helm installs them on install out of templates/crds unless they are already present in the cluster (this should work with all Helm 3 releases). If present, Helm skips the upgrade. If upgrading the CRDs is required, e.g. due to a new release of the operator being installed or templates newly supporting a CRD field, the user or deployment process has to take care of the upgrade. The main reason for the unknown field error is outdated CRDs, leading to the operator not recognising the field.

Installation and upgrade of the CRDs may change or gain another mechanism in the future if #2697 gets merged.
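
As a quick diagnostic sketch, you can check whether the installed prometheuses CRD already knows the field the chart renders; empty output means the CRD is outdated and needs upgrading:

kubectl get crd prometheuses.monitoring.coreos.com \
  -o jsonpath='{.spec.versions[*].schema.openAPIV3Schema.properties.spec.properties.hostNetwork}'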

Archanadorepalli commented 1 year ago

Hi everyone, I am trying to install Prometheus on Azure Kubernetes Service (AKS version 1.24.6, Helm version 3.10), but I am facing the issue below. Can anyone help me with this? I did not understand the concept of CRDs.

Command:

helm install prometheus prometheus-community/kube-prometheus-stack -n prometheus

Error I am getting:

Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Prometheus.spec): unknown field "hostNetwork" in com.coreos.monitoring.v1.Prometheus.spec

Thanks

NibiruHeisenberg commented 1 year ago

@Archanadorepalli try running

kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml

prior to installing the helm chart.
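
Equivalently, the same commands as a compact loop:

for crd in alertmanagerconfigs alertmanagers podmonitors probes prometheuses prometheusrules servicemonitors thanosrulers; do
  kubectl apply --server-side -f "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_${crd}.yaml"
done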

jinnerbichler commented 1 year ago

kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.61.1/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml

The above worked for me.

DodgeCamaro commented 1 year ago

Same error with 43.0.0.

rgaduput commented 1 year ago

Was facing the same. Deleting the existing CRDs on the cluster from previous installations before helm install helped me:

kubectl delete crd alertmanagerconfigs.monitoring.coreos.com alertmanagers.monitoring.coreos.com podmonitors.monitoring.coreos.com probes.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com thanosrulers.monitoring.coreos.com

villesau commented 1 year ago

Is there any way to fix this without ad hoc kubectl commands? Our infra is fully managed via Terraform & Helm, and this is the first time a manual command apparently needs to be issued.

zeritti commented 1 year ago

> Is there any way to fix this without ad hoc kubectl commands? Our infra is fully managed via Terraform & Helm, and this is the first time a manual command apparently needs to be issued.

A new community chart, prometheus-operator-crds, has been made available with the latest release, 0.1.1. Deploying a chart to install/upgrade the CRDs will certainly fit in many environments.
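
Installing it alongside the main chart might look like this sketch (the release name prometheus-operator-crds is illustrative); the same release can also be expressed as a Terraform helm_release:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install prometheus-operator-crds prometheus-community/prometheus-operator-crds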

leonardoirepa commented 1 year ago

> Was facing the same. Deleting the existing CRDs on the cluster from previous installations before helm install helped me: kubectl delete crd alertmanagerconfigs.monitoring.coreos.com alertmanagers.monitoring.coreos.com podmonitors.monitoring.coreos.com probes.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com thanosrulers.monitoring.coreos.com

leonardoirepa commented 1 year ago

Thanks @rgaduput, this works for me!

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

rosehsu47 commented 1 year ago

same here

Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Prometheus.spec): unknown field "hostNetwork" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "shards" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "tsdb" in com.coreos.monitoring.v1.Prometheus.spec]

jz543fm commented 1 year ago

Same with kube-prometheus-stack-33.0.0.

decipher27 commented 1 year ago

This works: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#from-42x-to-43x

llamahunter commented 1 year ago

> Was facing the same. Deleting the existing CRDs on the cluster from previous installations before helm install helped me: kubectl delete crd alertmanagerconfigs.monitoring.coreos.com alertmanagers.monitoring.coreos.com podmonitors.monitoring.coreos.com probes.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com thanosrulers.monitoring.coreos.com

DO NOT do this. Deleting your CRDs will cause any resources that reference them to also be deleted. You will then have to reconstruct everything that was using an AlertmanagerConfig, PrometheusRule, or ServiceMonitor.
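
If you do decide to recreate the CRDs anyway, a minimal sketch for backing up the affected custom resources first (file names are illustrative):

for kind in alertmanagerconfigs alertmanagers podmonitors probes prometheuses prometheusrules servicemonitors thanosrulers; do
  kubectl get "$kind.monitoring.coreos.com" --all-namespaces -o yaml > "backup-$kind.yaml"
done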

rajeshyadavttn commented 1 year ago

You can use --version=36.2.0, which does not require applying the CRDs:

helm install prom-issue prometheus-community/kube-prometheus-stack -f gauravdoc/values-kps.yml -n observability --version=36.2.0

ManojSuyal commented 1 year ago

I'm also getting the same error while trying to deploy it on AKS version 1.24. I tried two chart versions, 45.20 and 45.23; however, these charts work fine on Kubernetes installed in a VirtualBox lab environment.

error: error validating ".\prometheus_stack.yaml": error validating data: [ValidationError(Prometheus.spec): unknown field "hostNetwork" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "tsdb" in com.coreos.monitoring.v1.Prometheus.spec]; if you choose to ignore these errors, turn validation off with --validate=false

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

stale[bot] commented 1 year ago

This issue is being automatically closed due to inactivity.