Open marcofranssen opened 1 month ago
Turns out this is controlled by
https://github.com/aws/karpenter-provider-aws/blob/main/charts/karpenter-crd/values.yaml#L5
This is something that should be made very clear in the documentation or migration guide.
It isn't mentioned here https://aws.amazon.com/blogs/containers/announcing-karpenter-1-0/
@marcofranssen We just stumbled over this as well. Ideally, this would default to the namespace of the Helm chart deployment.
Yes indeed, that would be even better.
I encountered the same problem using ArgoCD. The `karpenter-crd` chart is installed correctly in the `karpenter` namespace. But the `karpenter` chart also installs the same CRDs (using ArgoCD) and wants to change them to the `kube-system` namespace. An option to disable installing CRDs using the `karpenter` chart would help.
You can already do so with `--skip-crds`.
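For example, when installing the main chart (release name, namespace, and version below are just placeholders, not values from this thread):

```shell
# Skip rendering the CRDs bundled with the main karpenter chart, so the
# copies managed by the karpenter-crd chart (or another tool) are left alone.
# Release name, namespace, and version are illustrative.
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace karpenter \
  --version 1.0.0 \
  --skip-crds
```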
@marcofranssen using that flag only works if you install the "karpenter-crd" Helm chart; if you use the "normal" one, the CRDs are symlinked and the replacement does not happen. (https://github.com/aws/karpenter-provider-aws/tree/28da0b96b6086679f75e656d31ac65bd7fca2bc0/charts/karpenter)
Looks like the modification is made with a post job hook, and the referenced image digest is not compatible with ARM
Related to #6819 #6765
Related to #6544
Seems to be resolved in #6827
There's some additional justification given here, but if you need to template the CRDs (required if installing Karpenter outside of `kube-system` or customizing the service), you will need to use the `karpenter-crd` chart. This is why the `karpenter-crd` chart is used in the v1 migration guide, with an alternative of manually patching the CRDs.
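As a sketch, installing the CRD chart with the webhook service pointed at a non-default namespace looks roughly like the following. The `webhook.serviceNamespace` key is my reading of the values.yaml linked earlier in the thread; verify it against your chart version before relying on it.

```shell
# Install the CRDs via the karpenter-crd chart, templating the conversion
# webhook to target the namespace Karpenter actually runs in.
# The webhook.serviceNamespace value key is an assumption based on the
# chart's values.yaml referenced in this thread.
helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-crd \
  --namespace karpenter \
  --set webhook.serviceNamespace=karpenter
```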
As far as what's missing from our docs, I do think we can be a little more explicit. We do instruct users to install the CRD chart as part of the installation guide, but without a justification of why I can understand why users would continue to just use the standard chart since it already includes the CRDs, just without templating.
@jmdeal - In our case, in an ArgoCD environment, it's not the lack of clear instructions or justifications for using the `karpenter-crd` chart... it's that you cannot deploy the CRDs from the `karpenter-crd` chart at all over ArgoCD-managed resources, because of `.Release.Service` / `managed-by` and `release-name` stamping. You get fun errors like:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "ec2nodeclasses.karpenter.k8s.aws" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "karpenter-crd"
So you're stuck with the situation of having to figure it out yourself and potentially dealing with the Application construction as we are discussing in #6847.
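One common way around that ownership error is to adopt the existing CRDs into the Helm release by stamping the exact labels and annotations the error message asks for. A sketch, assuming the release is named `karpenter-crd` in the `karpenter` namespace:

```shell
# Add the ownership metadata Helm validates, so the already-existing CRDs
# can be adopted by the karpenter-crd release instead of failing the install.
# Release name/namespace below are assumptions; match them to your install.
for crd in ec2nodeclasses.karpenter.k8s.aws \
           nodeclaims.karpenter.sh \
           nodepools.karpenter.sh; do
  kubectl label crd "$crd" app.kubernetes.io/managed-by=Helm --overwrite
  kubectl annotate crd "$crd" \
    meta.helm.sh/release-name=karpenter-crd \
    meta.helm.sh/release-namespace=karpenter --overwrite
done
```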
For now, I'm "fixing" it with an override on top of the Helm chart, because the validation in the webhook is still useful for me on older Kubernetes versions.
From Tanka with ❤️.
{
  // Mixin that re-points each CRD's conversion webhook at the namespace
  // Karpenter is actually deployed in.
  local conversionWebhookSpecMixin = {
    spec+: {
      conversion+: {
        webhook+: {
          clientConfig+: {
            service+: {
              namespace: c.karpenter.namespace,
            },
          },
        },
      },
    },
  },

  karpenter: helm.template(releaseName, './vendor/charts/karpenter', {
    apiVersions: apiVersions,
    includeCrds: includeCrds,
    kubeVersion: kubeVersion,
    namespace: c.karpenter.namespace,
    noHooks: noHooks,
    values: values,
  }) + {
    // https://github.com/aws/karpenter-provider-aws/issues/6818
    'custom_resource_definition_ec_2nodeclasses.karpenter.k_8s.aws'+: conversionWebhookSpecMixin,
    'custom_resource_definition_nodeclaims.karpenter.sh'+: conversionWebhookSpecMixin,
    'custom_resource_definition_nodepools.karpenter.sh'+: conversionWebhookSpecMixin,
  },
}
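The same fix can be applied as a one-off without Tanka by patching the CRDs directly; this is a sketch, with `karpenter` standing in for whatever namespace your release actually lives in:

```shell
# Re-point the conversion webhook on each Karpenter CRD at the correct
# namespace (here assumed to be "karpenter") via a JSON merge patch.
for crd in ec2nodeclasses.karpenter.k8s.aws \
           nodeclaims.karpenter.sh \
           nodepools.karpenter.sh; do
  kubectl patch crd "$crd" --type merge -p \
    '{"spec":{"conversion":{"webhook":{"clientConfig":{"service":{"namespace":"karpenter"}}}}}}'
done
```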
Seems to be backported to v1.0.2. See https://github.com/aws/karpenter-provider-aws/pull/6855 and https://github.com/aws/karpenter-provider-aws/pull/6849, and there are possibly several other open issues at least partially related to this issue and its fix. Right?
Description
Observed Behavior:
When upgrading our Karpenter to the v1.0.0 chart, it fails at the conversion webhook, which is targeting the `kube-system` namespace. Our Karpenter is deployed in the `karpenter` namespace.

controller logs
Expected Behavior:
The conversion webhook targets the Helm Release namespace.
Reproduction Steps (Please include YAML):
Install Karpenter in the karpenter namespace using the release prior to v1.0.0. Then upgrade karpenter to v1.0.0.
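A rough sketch of the upgrade path described above (chart source and the pre-v1 version are illustrative, not taken from this report):

```shell
# Reproduce: install a pre-v1 release into the karpenter namespace...
helm install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace karpenter --create-namespace --version 0.37.0

# ...then upgrade to v1.0.0; the conversion webhook now points at kube-system.
helm upgrade karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace karpenter --version 1.0.0
```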
Versions:
Chart Version: v1.0.0
Kubernetes Version (`kubectl version`): 1.30

Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
If you are interested in working on this issue or have submitted a pull request, please leave a comment