helm / helm

The Kubernetes Package Manager
https://helm.sh
Apache License 2.0

Error: UPGRADE FAILED: unable to build kubernetes objects from current release manifest: resource mapping not found for name: "00-rook-privileged" namespace: "" from "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1" #11287

Closed: ruckc closed this issue 1 year ago

ruckc commented 2 years ago

I upgraded my cluster to 1.25 with existing helm charts installed that include PodSecurityPolicies. The upgrade process left everything in a working state, but I'm unable to upgrade my helm charts due to the missing API resources.

It would be nice if there were an upgrade option that supported ignoring missing resource mappings.

I tried using helm mapkubeapis to repair the release, but it only replaces apiVersion/kind pairings; it doesn't support removing resources from the release manifest.
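For reference, the plugin is invoked roughly like this; the release and namespace names below follow the rook-ceph example discussed later in this thread and may differ in your setup:

helm plugin install https://github.com/helm/helm-mapkubeapis
# preview the changes the plugin would make, then apply them
helm mapkubeapis rook-ceph --namespace rook-ceph --dry-run
helm mapkubeapis rook-ceph --namespace rook-ceph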

I also tried manually manipulating the helm release secret, but helm upgrade then just errored out with Error: UPGRADE FAILED: release: already exists.

While I understand the ideal order of operations is to upgrade the helm charts to remove these resources first, humans sometimes make mistakes, and helm should ideally support a way to recover without trashing the stateful resources in a malfunctioning helm chart.

As a temporary band-aid, I used helm template ... | kubectl apply -f -, but since that doesn't update the helm release secrets, I will be forced to keep using helm template going forward.
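Fleshed out, that workaround looks roughly like the following; the chart, release, and values names are hypothetical and stand in for whatever the affected release actually uses:

helm template rook-ceph rook-release/rook-ceph \
   --namespace rook-ceph -f values.yaml | kubectl apply -f -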

Output of helm version:

version.BuildInfo{Version:"v3.9.4", GitCommit:"dbc6d8e20fe1d58d50e6ed30f09a04a77e4c68db", GitTreeState:"clean", GoVersion:"go1.17.13"}

Output of kubectl version:

version.BuildInfo{Version:"v3.9.4", GitCommit:"dbc6d8e20fe1d58d50e6ed30f09a04a77e4c68db", GitTreeState:"clean", GoVersion:"go1.17.13"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): kubeadm cluster

joejulian commented 2 years ago

It's not a pretty answer, but it may get you close to what you're looking for.

If you remove the release secret(s) and then install again, helm should be able to adopt the resources in situ (since the annotations match), producing a clean release secret.
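A quick way to confirm those annotations before deleting anything is to inspect one of the released resources; helm 3 records ownership in meta.helm.sh annotations and the app.kubernetes.io/managed-by label. The deployment name and namespace below are hypothetical:

kubectl -n rook-ceph get deployment rook-ceph-operator -o yaml \
   | grep -E 'meta.helm.sh|app.kubernetes.io/managed-by'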

taxilian commented 2 years ago

> It's not a pretty answer, but it may get you close to what you're looking for.
>
> If you remove the release secret(s) and then install again, helm should be able to adopt the resources in situ (since the annotations match), producing a clean release secret.

Would that overwrite any of the settings or anything? I'm having the same issue and wish that --force would just ignore this type of problem, but this is a production system and I really would prefer not to lose my ceph cluster and all shared storage over a mistake like this.

joejulian commented 2 years ago

I would create a test cluster and confirm that.

dkrizic commented 2 years ago

I did that: I removed all the "sh.helm.release.v1.rook-ceph" secrets and simply ran

helm -n rook-ceph upgrade --install rook-ceph -f values.yaml rook-release/rook-ceph

Technically the helm release was removed and re-added, and all resources were overwritten.
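For anyone repeating this on a production cluster, a reasonable safety net is to back the secrets up before deleting them. A minimal sketch, assuming the standard owner and name labels helm puts on its release secrets:

kubectl -n rook-ceph get secrets -l owner=helm,name=rook-ceph -o yaml \
   > rook-ceph-release-secrets-backup.yaml
kubectl -n rook-ceph delete secrets -l owner=helm,name=rook-ceph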

taxilian commented 2 years ago

With all resources overwritten, does that break the deployment? Or does it just overwrite things with effectively what was already there anyway?

fastlorenzo commented 2 years ago

> I did that: I removed all the "sh.helm.release.v1.rook-ceph" secrets and simply ran
>
> helm -n rook-ceph upgrade --install rook-ceph -f values.yaml rook-release/rook-ceph
>
> Technically the helm release was removed and re-added, and all resources were overwritten.

I can confirm that doing this worked for me as well; the upgrade/install took a minute, and there was no disruption or loss of data.

portega-inbrain commented 1 year ago

Hi @taxilian. I'm having a similar issue. In my case I'm running

helm upgrade --install aws-node-termination-handler \
   --namespace kube-system \
   eks/aws-node-termination-handler --force

This command runs on an AWS instance as part of other setup and configuration steps managed by Ansible, and it returns the error:

Error: unable to build kubernetes objects from release manifest: resource mapping
not found for name: aws-node-termination-handler namespace:  from : no matches 
for kind PodSecurityPolicy in version policy/v1beta1 
ensure CRDs are installed first

My question is, where can I find those secrets on that instance? I'd like to try your solution. Many thanks!

taxilian commented 1 year ago

I would guess they would be in the kube-system namespace, given that is where your helm chart is installed. They would be called something like "sh.helm.release.????".

Note that this is going to recreate all resources for the chart, so whether or not it will work depends on the helm chart itself.
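Concretely, listing them looks something like this (namespace per the comment above; the naming pattern is sh.helm.release.v1.<release-name>.v<revision>):

kubectl -n kube-system get secrets | grep sh.helm.release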

github-actions[bot] commented 1 year ago

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

joejulian commented 1 year ago

There's a solution posted in this issue, and another solution using https://github.com/helm/helm-mapkubeapis that was posted in a different issue.

Closing this issue as solved.

tjsampson commented 11 months ago

For those who stumble here: I ran into a similar issue when attempting to upgrade prometheus-node-exporter. Here is what I did to solve it:

Find the secrets (use the correct namespace):

kubectl -n prometheus get secrets

Nuke all of the helm secrets that are causing the issue. For me, it was 'prometheus-node-exporter':

kubectl -n prometheus delete secrets/sh.helm.release.v1.prometheus-node-exporter.v1
kubectl -n prometheus delete secrets/sh.helm.release.v1.prometheus-node-exporter.v2
kubectl -n prometheus delete secrets/sh.helm.release.v1.prometheus-node-exporter.v3
kubectl -n prometheus delete secrets/sh.helm.release.v1.prometheus-node-exporter.v4
kubectl -n prometheus delete secrets/sh.helm.release.v1.prometheus-node-exporter.v5
...
kubectl -n prometheus delete secrets/sh.helm.release.v1.prometheus-node-exporter.v14

Yes, there were 14 versions in there, some of them more than 2 years old.

After that, I was able to install/upgrade the prometheus-node-exporter helm chart without an issue.

I am pretty sure this bit me once before, 8 or 9 months back, so I imagine I will stumble back onto my own comment the next time this happens and I have forgotten how to solve it! 👋 Future @tjsampson!
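If the release secrets carry helm's standard labels, all revisions can also be deleted in one command instead of one per revision; a sketch assuming those labels are present:

kubectl -n prometheus delete secrets -l owner=helm,name=prometheus-node-exporter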