kubewarden / kubewarden-controller

Manage admission policies in your Kubernetes cluster with ease
https://kubewarden.io
Apache License 2.0

`ClusterAdmissionPolicy` reconciliation loop: handle policies that reference a non-existing PolicyServer #72

Closed: flavio closed this issue 3 years ago

flavio commented 3 years ago

It's possible for the user to create a ClusterAdmissionPolicy object that refers to a non-existing PolicyServer.

When that happens, the status of the ClusterAdmissionPolicy resource should report this issue.

ereslibre commented 3 years ago

I think we can take a similar approach to what CustomResourceDefinitions do. Our status field looks like this currently:

https://github.com/kubewarden/kubewarden-controller/blob/f765da1e465ee8a4557e2810941fce16f3be61d5/apis/policies/v1alpha2/clusteradmissionpolicy_types.go#L167-L184

I think we can add a condition type PolicyServerBound with status "True" or "False". We could also add a boolean status field called policyServerBound: false by default (possibly defaulted by a webhook), and only set to true by a reconciliation loop that ensures the bound policy server exists.

Conditions are good for use with kubectl wait, but any third party using client-go or a similar client can watch the resource and wait until policy.status.policyServerBound is true.
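
For illustration, a minimal sketch of what those additions could look like (the condition and field names are the ones proposed above; everything else is trimmed down and not the actual struct from the commit linked earlier):

```go
package v1alpha2

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ClusterAdmissionPolicyStatus: a trimmed-down sketch of the proposed
// additions; the real struct linked above carries more fields.
type ClusterAdmissionPolicyStatus struct {
	// Conditions follow the usual Kubernetes conventions. A
	// "PolicyServerBound" condition set to "True" or "False" would make
	// the binding state visible to kubectl wait.
	// +optional
	Conditions []metav1.Condition `json:"conditions,omitempty"`

	// PolicyServerBound starts out false (possibly defaulted by a
	// webhook) and is flipped to true by the reconciliation loop once
	// the referenced PolicyServer is confirmed to exist.
	PolicyServerBound bool `json:"policyServerBound"`
}
```

With the condition in place, something like `kubectl wait --for=condition=PolicyServerBound clusteradmissionpolicy/my-policy` would block until the policy is bound.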

flavio commented 3 years ago

Is there a way to replicate the UX users get when they create a Pod that uses a non-existing volume mount? Something that says "pending..."

ereslibre commented 3 years ago

Yes, this could be achieved with a policyStatus string field inside the status struct, internally filled from a Go "enum". This policyStatus could have different valid values: unscheduled, unschedulable, pending (meaning the policy server exists and all resources are being reconciled), and active. If we go this way, we can remove the policyActive bool from status, since it would be superseded by this string field.
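
A sketch of such a Go "enum" (the type and constant names are illustrative; only the state strings come from the proposal above):

```go
package v1alpha2

// PolicyStatus is a string-backed Go "enum" describing the lifecycle of a
// ClusterAdmissionPolicy.
type PolicyStatus string

const (
	// PolicyStatusUnscheduled: just created, the policy server
	// reference has not been checked yet.
	PolicyStatusUnscheduled PolicyStatus = "unscheduled"
	// PolicyStatusUnschedulable: the referenced policy server does
	// not exist.
	PolicyStatusUnschedulable PolicyStatus = "unschedulable"
	// PolicyStatusPending: the policy server exists, resources are
	// being reconciled.
	PolicyStatusPending PolicyStatus = "pending"
	// PolicyStatusActive: everything was reconciled correctly.
	PolicyStatusActive PolicyStatus = "active"
)
```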

A webhook would ensure that every newly created policy starts in the unscheduled state, for example.

In the reconciliation loop, if no policy server exists with the given name, it would set the policyStatus of that cluster admission policy to unschedulable. Otherwise, it would set it to pending. When the reconciliation loop ends for that policy, if everything was reconciled correctly, it can set policyStatus to active.
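
A hedged sketch of those transitions in a controller-runtime reconciler. The `spec.policyServer` field name, the cluster-scoped PolicyServer lookup, and the `reconcilePolicyResources` helper are all assumptions for illustration, not the controller's actual code:

```go
package controllers

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"

	"github.com/kubewarden/kubewarden-controller/apis/policies/v1alpha2"
)

// ClusterAdmissionPolicyReconciler embeds client.Client, as
// controller-runtime reconcilers typically do.
type ClusterAdmissionPolicyReconciler struct {
	client.Client
}

// reconcileStatus sketches the state transitions described above.
func (r *ClusterAdmissionPolicyReconciler) reconcileStatus(ctx context.Context, policy *v1alpha2.ClusterAdmissionPolicy) error {
	// Look up the PolicyServer named by the policy (spec.policyServer
	// is an assumed field name).
	policyServer := v1alpha2.PolicyServer{}
	if err := r.Get(ctx, client.ObjectKey{Name: policy.Spec.PolicyServer}, &policyServer); err != nil {
		if apierrors.IsNotFound(err) {
			// No policy server exists with the given name.
			policy.Status.PolicyStatus = v1alpha2.PolicyStatusUnschedulable
			return r.Status().Update(ctx, policy)
		}
		return err
	}

	// The policy server exists: mark the policy as pending while the
	// remaining resources are reconciled.
	policy.Status.PolicyStatus = v1alpha2.PolicyStatusPending
	if err := r.Status().Update(ctx, policy); err != nil {
		return err
	}

	if err := r.reconcilePolicyResources(ctx, policy); err != nil {
		return err
	}

	// Everything was reconciled correctly: the policy is active.
	policy.Status.PolicyStatus = v1alpha2.PolicyStatusActive
	return r.Status().Update(ctx, policy)
}

// reconcilePolicyResources is a hypothetical placeholder for the
// controller's actual resource reconciliation (webhook configurations,
// and so on).
func (r *ClusterAdmissionPolicyReconciler) reconcilePolicyResources(ctx context.Context, policy *v1alpha2.ClusterAdmissionPolicy) error {
	return nil
}
```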

If we show this field in the table (doable with kubebuilder, which relies on apimachinery), then kubectl get clusteradmissionpolicy would display the status as another column in the table.
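
For reference, this is the kind of kubebuilder printcolumn marker that would add such a column, assuming the policyStatus field name from above (the Spec and Status types are defined elsewhere in the package):

```go
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Policy Status",type=string,JSONPath=".status.policyStatus"

// ClusterAdmissionPolicy is the Schema for the clusteradmissionpolicies API.
type ClusterAdmissionPolicy struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ClusterAdmissionPolicySpec   `json:"spec,omitempty"`
	Status ClusterAdmissionPolicyStatus `json:"status,omitempty"`
}
```

kubectl get clusteradmissionpolicy would then print a POLICY STATUS column next to NAME and AGE.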

raulcabello commented 3 years ago

@ereslibre I like this approach. Should we update https://github.com/kubewarden/kubewarden-controller/issues/69 to set the policyStatus to unscheduled when a clusteradmissionpolicy is created?

ereslibre commented 3 years ago

> @ereslibre I like this approach. Should we update #69 to set the policyStatus to unscheduled when a clusteradmissionpolicy is created?

Good idea, let me know if you need anything from me :)

flavio commented 3 years ago

I like the proposal too. I think we just need the unschedulable, pending, and active statuses; I don't think we need the unscheduled status.

viccuad commented 3 years ago

Agree with just using unschedulable, pending, and active.

ereslibre commented 3 years ago

I think we have to default the status to a transient value. If we don't introduce unscheduled, there's no way to tell the difference between "waiting to confirm whether the policy server exists" and "everything is fine, reconciling all resources", and I think those two states are different.

ereslibre commented 3 years ago

Call it unscheduled or any other name; I don't care about the names, just about the states and transitions we go through :)

viccuad commented 3 years ago

Closed by https://github.com/kubewarden/kubewarden-controller/pull/76.