How about using a ValidatingAdmissionPolicy with a custom, singleton params kind, and then having controllers write to that custom resource based on a watch for their node class (or whatever)?
We'd make the extra CRD part of the CRDs chart, leaving the controller to create an instance of that CR and/or add itself in.
It's more to implement, but as a pattern it leaves room for multiple providers in one cluster, and it should avoid the risk of different implementations clashing over CRD writes.
This is an outline; let me know if folks want details clarified.
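A minimal sketch of the shape this could take, assuming a hypothetical cluster-scoped `NodeClassRefConfig` params CRD (the group, kind, and field names below are all invented for illustration): the policy checks NodePool writes against whatever the installed providers have registered in the singleton params object, and the binding wires the two together.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: nodepool-nodeclassref
spec:
  # Hypothetical singleton params kind that each provider's controller
  # writes its supported node class kinds into.
  paramKind:
    apiVersion: karpenter.example.com/v1alpha1
    kind: NodeClassRefConfig
  matchConstraints:
    resourceRules:
      - apiGroups: ["karpenter.sh"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["nodepools"]
  validations:
    - expression: >-
        params.spec.allowedKinds.exists(k,
          k == object.spec.template.spec.nodeClassRef.kind)
      message: "nodeClassRef.kind is not registered by any installed provider"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: nodepool-nodeclassref
spec:
  policyName: nodepool-nodeclassref
  paramRef:
    # The singleton instance; each provider adds itself to it at startup.
    name: default
    parameterNotFoundAction: Deny
  validationActions: ["Deny"]
```

Setting `parameterNotFoundAction: Deny` would also keep NodePools from referencing a node class before any provider has registered itself.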
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This should be fixed in v1. We're doing validation for group, but aren't doing validation for the other fields (name, kind).
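For reference, a hedged sketch of what rules for the remaining fields might look like as `x-kubernetes-validations` on the NodeClassReference portion of the schema (the exact rules and messages are illustrative, not what shipped in v1):

```yaml
nodeClassRef:
  type: object
  required: ["group", "kind", "name"]
  properties:
    kind:
      type: string
      x-kubernetes-validations:
        # Kubernetes kinds are UpperCamelCase; this catches e.g. "ec2nodeclass".
        - rule: "self.matches('^[A-Z][A-Za-z0-9]*$')"
          message: "kind must be a non-empty UpperCamelCase Kubernetes kind"
    name:
      type: string
      x-kubernetes-validations:
        # Reject references that omit the object name entirely.
        - rule: "self.size() > 0"
          message: "name may not be empty"
```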
Description
What problem are you trying to solve?
Right now, there's no CEL or webhook validation for what's passed through the NodeClassReference. This means it's easy to forget the version portion or to improperly capitalize kinds, etc., when specifying the reference.
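To make the failure mode concrete, here is a hypothetical NodePool fragment with the kinds of mistakes described above; per the comment above, only the group field currently gets a rule:

```yaml
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: ec2nodeclass   # wrong capitalization; should be EC2NodeClass, but nothing catches it
        name: ""             # empty name also passes schema validation today
```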
It probably also makes sense for the cloud provider to inject its own CEL validation when it pulls the NodePool CRD into its Helm chart (similar to what AWS does in its validation scripts: https://github.com/aws/karpenter-provider-aws/tree/main/hack/validation)
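In that spirit, a sketch of the kind of rule a provider's packaging step could splice into the vendored NodePool CRD's nodeClassRef schema, pinning the reference to the classes it actually serves (the AWS-flavored values are examples, not the scripts' actual output):

```yaml
# Appended to the nodeClassRef object schema before the CRD lands in the Helm chart.
x-kubernetes-validations:
  - rule: "self.group == 'karpenter.k8s.aws' && self.kind == 'EC2NodeClass'"
    message: "nodeClassRef must point to an EC2NodeClass in group karpenter.k8s.aws"
```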