Open ricoberger opened 2 months ago
This is a bug. Unfortunately, Kubernetes native type schemas do not include information about how the resource should be validated.
We can work around this for now, until those schemas are populated upstream, by hardcoding the rules for the embedded schemas, since they do not change except when new resources are added.
/assign
Hi @alexzielenski,
I'm also running into this issue and I've been trying to create a workaround, but can't seem to make it work.
My idea was to write a schema patch:
{
  "components": {
    "schemas": {
      "io.k8s.api.rbac.v1.ClusterRole": {
        "properties": {
          "metadata": {
            "allOf": [
              {
                "$ref": "#/components/schemas/CustomObjectMeta"
              }
            ]
          }
        }
      },
      "CustomObjectMeta": {
        "properties": {
          "name": {
            "type": "string"
          }
        },
        "x-kubernetes-validations": [
          {
            "rule": "1 == 2"
          }
        ]
      }
    }
  }
}
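As an aside on why a patch like this can only add checks: in JSON Schema, allOf is a conjunction, so every branch must pass, and wrapping metadata in allOf cannot relax the built-in "lowercase RFC 1123 subdomain" rule. A minimal sketch of that semantics follows; the regex and helper names are my own illustration, not kubectl-validate internals:

```python
import re

# JSON Schema's allOf is a conjunction: a value must satisfy EVERY branch.
# A schema patch that wraps metadata in allOf can therefore only add
# constraints; the built-in RFC 1123 subdomain check still applies.

RFC1123_SUBDOMAIN = re.compile(
    r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$"
)

def valid_rfc1123(name: str) -> bool:
    # Standard DNS subdomain rule used for most Kubernetes object names.
    return len(name) <= 253 and bool(RFC1123_SUBDOMAIN.match(name))

def all_of(*checks):
    # Conjunction of schema branches, mirroring the allOf keyword.
    return lambda name: all(check(name) for check in checks)

# Hypothetical composed validator: the built-in check plus a custom one.
validate = all_of(valid_rfc1123, lambda name: not name.startswith("forbidden"))

print(validate("vpa-actor"))         # True
print(validate("system:vpa-actor"))  # False: ':' fails the RFC 1123 check
```

So even with a patched CustomObjectMeta schema, the original name validation would have to be removed (or skipped) rather than composed over, which is why this likely needs a fix in kubectl-validate itself.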
When I run kubectl validate with this patch, I see that the new validation rule is appended but does not replace the existing validation of metadata; the "lowercase RFC 1123 subdomain" check is still applied, even though #/components/schemas/CustomObjectMeta is a new schema.
Is it possible for me to write a temporary workaround, or should this be fixed in kubectl-validate instead (in which case, I would be happy to help)?
Thanks in advance!
What happened?
The validation of ClusterRoles whose name uses the system: prefix, as the Vertical Pod Autoscaler does, fails.
What did you expect to happen?
The validation of ClusterRoles with the system: prefix in the name shouldn't fail.
How can we reproduce it (as minimally and precisely as possible)?
Save the following YAML as a vpa-actor.yaml file and validate it with kubectl validate vpa-actor.yaml
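The issue's original manifest is not reproduced above; as a hedged sketch, any ClusterRole whose name carries the system: prefix should trigger the same failure, for example:

```shell
# Hypothetical minimal manifest (not the issue's original YAML): a ClusterRole
# named with the "system:" prefix, as the Vertical Pod Autoscaler uses.
cat > vpa-actor.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:vpa-actor
rules: []
EOF

# Then validate it; the name check fails even though the API server
# accepts such names:
#   kubectl validate vpa-actor.yaml
```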
Anything else we need to know?
No response
Kubernetes version