Upstream Kubeflow applies a namespace selector in its admission-webhook manifests, which restricts the admission-webhook to apply only to namespaces that have been created by kubeflow-profiles for users. Currently, our webhook applies to all pods in all namespaces. I think this configuration is why, if we break our admission-webhook workload (eg: delete the deployment, but do not delete the mutatingwebhook that uses the deployment), all pods on the cluster are blocked from deploying.
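For reference, the relevant part of upstream's MutatingWebhookConfiguration looks roughly like the sketch below. This is not a verbatim copy of the upstream manifest, but the label is the same one used for the test namespace further down (the label kubeflow-profiles applies to user namespaces):

namespaceSelector:
  matchLabels:
    app.kubernetes.io/part-of: kubeflow-profile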
We can adopt upstream's configuration by changing our charm's mutatingwebhookconfiguration to remove the objectSelector and add a namespaceSelector (a rough before/after sketch follows). Strictly speaking, the objectSelector could stay, but I believe it is unnecessary once the namespaceSelector is added.
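Purely as an illustration of the change (the exact objectSelector our charm sets today is not reproduced here, so the "before" key is a placeholder):

# before: webhook scoped by pod labels
objectSelector:
  matchExpressions:
    - key: some-pod-label            # placeholder key, for illustration only
      operator: Exists

# after: drop the objectSelector and scope by namespace label instead,
# matching the upstream selector shown above
namespaceSelector:
  matchLabels:
    app.kubernetes.io/part-of: kubeflow-profile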
To test this:
juju deploy admission-webhook
create a namespace (note that if the selectors are changed to a namespace selector as described above, we must have the correct label in the namespace metadata):

apiVersion: v1
kind: Namespace
metadata:
  name: user
  labels:
    app.kubernetes.io/part-of: kubeflow-profile
create a PodDefault, for example (the content of this poddefault doesn't matter - this was taken from our kfp charm):
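(a sketch of such a PodDefault is below, assuming the same shape as the kfp one; the selector label access-ml-pipeline: "true" and the token mount under /var/run/secrets/kubeflow/pipelines come from the later steps, while the remaining field values are illustrative)

apiVersion: kubeflow.org/v1alpha1
kind: PodDefault
metadata:
  name: access-ml-pipeline
  namespace: user
spec:
  desc: Allow access to Kubeflow Pipelines
  selector:
    matchLabels:
      access-ml-pipeline: "true"
  volumes:
    - name: volume-kf-pipeline-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 7200
              audience: pipelines.kubeflow.org   # audience value is an assumption
  volumeMounts:
    - mountPath: /var/run/secrets/kubeflow/pipelines
      name: volume-kf-pipeline-token
      readOnly: true
  env:
    - name: KF_PIPELINES_SA_TOKEN_PATH           # env var name is an assumption
      value: /var/run/secrets/kubeflow/pipelines/token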
create a pod in that namespace with the label access-ml-pipeline: "true", for example:
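(a minimal pod for this, assuming the name testpod to match the exec command in the next step; the image just needs a shell)

apiVersion: v1
kind: Pod
metadata:
  name: testpod
  namespace: user
  labels:
    access-ml-pipeline: "true"   # matches the PodDefault selector above
spec:
  containers:
    - name: testpod
      image: ubuntu:22.04        # any image with bash works for the exec check
      command: ["bash", "-c", "sleep infinity"]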
exec a shell in the above pod (kubectl exec -it testpod -- bash) and confirm that /var/run/secrets/kubeflow/pipelines/token exists

Something like the procedure above should be added as an integration test so we confirm PodDefaults actually work correctly.

An interesting additional test could be creating a pod in an unlabeled namespace (eg: one that shouldn't be in-scope for the mutating webhook) to make sure the poddefault is not applied.