https://github.com/helm/helm/pull/7649 added a behaviour to Helm 3.2.0 where an install that would otherwise have failed because of pre-existing resources can succeed by adopting those resources, provided they carry metadata matching that `helm install` execution: the annotations `meta.helm.sh/release-name` and `meta.helm.sh/release-namespace`, and the label `app.kubernetes.io/managed-by: Helm`.
However, the documentation was overlooked, possibly because this was intended to be exposed to users via `helm commandeer`, which has since fallen out of the Helm 3.3 release milestone.
So someone should write up my first sentence in a way that is actually consumable and add it somewhere in the docs, or write an even clearer description with examples and caveats.
We should also document the automatically-added `meta.helm.sh` annotations, and the fact that the `app.kubernetes.io/managed-by` label is now added automatically, so it no longer needs to be in the Best Practices doc, or can perhaps be called out there as "added by Helm anyway".
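For whoever writes this up, the adoption flow could be sketched roughly as follows. This is a hedged example, not official docs text: `my-deployment`, `my-release`, `default`, and `./my-chart` are placeholder names, and the pre-existing resource must be annotated and labelled before the install runs.

```shell
# The pre-existing resource must carry the release metadata Helm
# checks for before it will take ownership of it:
kubectl annotate deployment my-deployment \
  meta.helm.sh/release-name=my-release \
  meta.helm.sh/release-namespace=default
kubectl label deployment my-deployment \
  app.kubernetes.io/managed-by=Helm

# With that metadata in place (Helm 3.2.0+), this install no longer
# fails on the pre-existing Deployment; Helm adopts it into the
# release instead of erroring with "cannot be imported into the
# current release":
helm install my-release ./my-chart --namespace default
```

Note that the annotation values must exactly match the release name and namespace passed to `helm install`, otherwise the install still fails.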
Per @hickeyma:
> If it helps for what to write, there's already a live example of this being used, see https://github.com/aws/eks-charts/tree/master/stable/aws-vpc-cni#adopting-the-existing-aws-node-resources-in-an-eks-cluster and background discussion at https://github.com/aws/eks-charts/issues/57#issuecomment-628403245
Although there's probably nothing that can be done now, this was also missed from the 3.2.0 release notes.