
[kube-prometheus-stack] ValidationError(Prometheus.spec): unknown field "scrapeConfigNamespaceSelector" #3680

Open · david-nano opened 1 year ago

david-nano commented 1 year ago

Describe the bug

When trying to upgrade the stack from version 45.6.0 to 48.3.0, helm upgrade returns the following error:

Error: UPGRADE FAILED: error validating "": error validating data: [ValidationError(Prometheus.spec): unknown field "scrapeConfigNamespaceSelector" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "scrapeConfigSelector" in com.coreos.monitoring.v1.Prometheus.spec]
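
A quick way to confirm that the installed Prometheus CRD predates these fields is to query its schema. A minimal diagnostic sketch (the resource names follow the stock prometheus-operator CRDs; verify against your own cluster):

# No output here means the installed CRD does not yet know the new scrape-config fields
kubectl get crd prometheuses.monitoring.coreos.com -o yaml | grep -i scrapeconfig

# Or ask the API server directly; this errors out if the field is unknown to the schema
kubectl explain prometheus.spec.scrapeConfigSelector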

What's your helm version?

version.BuildInfo{Version:"v3.12.2", GitCommit:"1e210a2c8cc5117d1055bfaa5d40f51bbc2e345e", GitTreeState:"clean", GoVersion:"go1.20.5"}

What's your kubectl version?

Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.4", GitCommit:"fa3d7990104d7c1f16943a67f11b154b71f6a132", GitTreeState:"clean", BuildDate:"2023-07-19T12:20:54Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.9", GitCommit:"9710807c82740b9799453677c977758becf0acbb", GitTreeState:"clean", BuildDate:"2022-12-08T10:08:06Z", GoVersion:"go1.18.9", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.27) and server (1.24) exceeds the supported minor version skew of +/-1

Which chart?

kube-prometheus-stack

What's the chart version?

45.6.0 -> 48.3.0

What happened?

Getting the error described above.

What you expected to happen?

helm upgrade completes without any manual intervention.

How to reproduce it?

Use the same values for the stack as I've provided (simple as that) and install it on your own cluster. Once it is deployed, bump the chart version and run helm upgrade.
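
Roughly, the reproduction boils down to the following (a sketch; the prometheus-community repo alias and the values.yaml file name are assumptions, and the values themselves are listed in the next section):

# Install the affected chart version with the values below
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --version 45.6.0 -f values.yaml

# Then attempt the upgrade without touching the CRDs first
helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --version 48.3.0 -f values.yaml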

Enter the changed values of values.yaml?

kube-prometheus-stack:
  prometheus:
    ingress:
      enabled: true
      ingressClassName: nginx
      tls:
      hosts:
        - prometheus.dc-infra.local
    prometheusSpec:
      additionalScrapeConfigs:
        - job_name: nginx-controller
          scrape_interval: 10s
          static_configs:
            - targets: [ "ingress-nginx-controller-controller-metrics.nginx-controller.svc.cluster.local:10254" ]

Enter the command that you execute and failing/misfunctioning.

helm upgrade -n monitoring kube-prometheus-stack kube-prometheus-stack-umbrella/ (an umbrella chart, since I'm pulling in other components as well)

Anything else we need to know?

No response

zeritti commented 1 year ago

Is it possible that CRDs have not been upgraded yet? Prometheus operator CRDs have to be upgraded before upgrading the chart to release 48.
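
For reference, the chart's upgrade notes prescribe a server-side apply of each CRD manifest before the chart upgrade. A sketch along those lines, assuming the v0.66.0 operator tag referenced by the 48.x notes (check the README of your exact chart version):

# Upgrade the prometheus-operator CRDs before upgrading the chart itself
for crd in alertmanagerconfigs alertmanagers podmonitors probes prometheusagents prometheuses prometheusrules scrapeconfigs servicemonitors thanosrulers; do
  kubectl apply --server-side -f \
    "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.66.0/example/prometheus-operator-crd/monitoring.coreos.com_${crd}.yaml"
done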

david-nano commented 1 year ago

Thanks @zeritti, my bad. I've tried that now, but I'm getting this error:

error: Apply failed with 1 conflict: conflict with "helm" using apiextensions.k8s.io/v1: .metadata.annotations.controller-gen.kubebuilder.io/version
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
zeritti commented 1 year ago

That error points to the fact that the currently installed CRDs are owned by helm, i.e. were installed as a chart, probably prometheus-operator-crds. Please check whether that is the case in your namespace with helm list -n NAMESPACE. I reckon it is, and in that case you should upgrade the CRDs first through the appropriate release of that same chart. I assumed they had been installed with the stack chart when referring to the upgrade notes.
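
That check is a one-liner (the monitoring namespace is taken from the commands earlier in the thread):

# A prometheus-operator-crds release in this list means the CRDs are helm-managed
# and should be upgraded through that chart rather than with kubectl
helm list -n monitoring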

david-nano commented 1 year ago

Well, you're right, they might have been installed with the stack. So do I need --force-conflicts?

jmtt89 commented 1 year ago

I have the same error, and yes, the reason is that the "currently installed CRDs are owned by helm". But I don't use prometheus-operator-crds; I just followed the docs of kube-prometheus-stack.

When you install kube-prometheus-stack using helm, the CRDs are marked as managed by helm, which you can check with:

$> kubectl get crd alertmanagers.monitoring.coreos.com --show-managed-fields -o yaml
...
  managedFields:
  - apiVersion: apiextensions.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      ...
    manager: helm
    operation: Update
    time: "2023-06-14T18:06:53Z"
...

So basically, you can't cleanly upgrade the CRDs manually because helm was used to install them, and you can't use helm to upgrade kube-prometheus-stack because you need to upgrade the CRDs before updating the chart...

I think the only options we have are to use --force-conflicts or to delete all the CRDs :/

david-nano commented 1 year ago

@zeritti any insight about this?

zeritti commented 1 year ago

What @jmtt89 writes above is correct. If the CRDs have been installed with the stack chart, they will not be upgraded by helm, and the fields to be updated will eventually have to change their manager, in this case from helm to kubectl. Using --force-conflicts will indeed resolve the conflict by changing the field manager and updating the fields. More info in server-side apply.
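
Concretely, that means re-running the server-side apply with the flag added, e.g. for the Prometheus CRD (same v0.66.0 assumption as in the sketch above):

# Force the apply so kubectl takes over field management from helm
kubectl apply --server-side --force-conflicts -f \
  "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.66.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml"
# Repeat for each CRD manifest from the upgrade notes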

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.