kubewarden / kubewarden-controller

Manage admission policies in your Kubernetes cluster with ease
https://kubewarden.io
Apache License 2.0

Policy-server-default pod is not recreated after deletion #326

Closed: kravciak closed this issue 1 year ago

kravciak commented 2 years ago

Current Behavior

The default policy-server pod is not recreated after I delete it manually while there are clusteradmissionpolicies in the cluster.

Expected Behavior

The pod should be recreated when I delete it manually; recreation should not depend on existing clusteradmissionpolicies.

Steps To Reproduce

Install Kubewarden with recommendedPolicies.enabled=True, then delete the policy-server pod.

# Install fresh kubewarden (crds, controller, defaults)
helm install --create-namespace -n kubewarden kubewarden-crds kubewarden/kubewarden-crds
helm install --wait -n kubewarden kubewarden-controller kubewarden/kubewarden-controller
helm install --wait -n kubewarden kubewarden-defaults kubewarden/kubewarden-defaults \
    --set recommendedPolicies.enabled=True

# Delete the policy-server pod; it is not recreated
kubectl delete pod -n kubewarden policy-server-default-86c5787d45-ldrpn
kubectl get pods -n kubewarden
    kubewarden-controller-759774675b-prwk7   1/1     Running   0          5m23s

# Delete the cluster admission policies and the policy-server pod starts again
kubectl delete clusteradmissionpolicies --all
kubectl get pods -n kubewarden
    kubewarden-controller-759774675b-prwk7   1/1     Running   0          6m29s
    policy-server-default-d8d6bdd9c-cnm4p    1/1     Running   0          54s

Environment

No response

Anything else?

No response

flavio commented 2 years ago

@kravciak thanks for submitting this issue.

Some questions:

kravciak commented 2 years ago

> Does your cluster have some policy CR defined?

Just the defaults that come with the installation, I did not create any policies. The existing clusteradmissionpolicies went from active to pending state when I deleted the policy-server pod.

ε k get customresourcedefinitions | grep kubewarden
policyservers.policies.kubewarden.io              2022-10-24T13:35:34Z
admissionpolicies.policies.kubewarden.io          2022-10-24T13:35:34Z
clusteradmissionpolicies.policies.kubewarden.io   2022-10-24T13:35:34Z

ε k get clusteradmissionpolicies.policies.kubewarden.io 
NAME                        POLICY SERVER   MUTATING   MODE      OBSERVED MODE   STATUS
no-host-namespace-sharing   default         false      monitor   monitor         pending
no-privilege-escalation     default         true       monitor   monitor         pending
no-privileged-pod           default         false      monitor   monitor         pending
do-not-run-as-root          default         true       monitor   monitor         pending
do-not-share-host-paths     default         false      monitor   monitor         pending
drop-capabilities           default         true       monitor   monitor         pending

ε k get admissionpolicies.policies.kubewarden.io -A
No resources found

> What is the status of the Deployment that controls the default policy server?

ε k get deploy -n kubewarden policy-server-default -o yaml
...
  - message: 'Internal error occurred: failed calling webhook "clusterwide-do-not-run-as-root.kubewarden.admission":
      failed to call webhook: Post "https://policy-server-default.kubewarden.svc:8443/validate/clusterwide-do-not-run-as-root?timeout=10s":
      no endpoints available for service "policy-server-default"'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
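The ReplicaFailure condition above points at a chicken-and-egg situation: the cluster-wide policies register validating webhooks that call the policy-server Service, and once the pod is gone that Service has no endpoints, so the ReplicaSet's own pod-creation requests are rejected by those same webhooks. One quick way to detect this state in a script is to pull the ReplicaFailure reason out of the Deployment's conditions. The snippet below is a sketch that runs against a sample conditions document modeled on the output above; in a live cluster you would feed it `kubectl get deploy -n kubewarden policy-server-default -o json` instead of the hardcoded sample.

```shell
# Sample conditions JSON modeled on the Deployment status above
# (live cluster: kubectl get deploy -n kubewarden policy-server-default -o json)
cat > /tmp/deploy.json <<'EOF'
{"status":{"conditions":[{"type":"ReplicaFailure","status":"True","reason":"FailedCreate","message":"Internal error occurred: failed calling webhook"}]}}
EOF

# Extract the reason of the failing condition with POSIX tools only
grep -o '"reason":"[^"]*"' /tmp/deploy.json | cut -d'"' -f4
# prints: FailedCreate
```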

> If you create a policy, is the Pod recreated?

When I create a policy it stays in the pending state; the pod is not recreated.

> If your cluster has no policies defined...

The pod is recreated as expected.
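If the deadlock is indeed the policies' webhooks intercepting pod creation in the kubewarden namespace itself, one common mitigation is to exclude the namespace hosting the policy server from the webhooks via a namespaceSelector, so the ReplicaSet can always recreate the pod. The fragment below is a hypothetical sketch of that pattern, not necessarily what the controller generates; the webhook name and service details are copied from the error message above, everything else is an assumption.

```yaml
# Hypothetical sketch: exempt the namespace hosting the policy server
# from the policy webhook so pod recreation cannot be blocked.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: clusterwide-do-not-run-as-root   # name taken from the error above
webhooks:
  - name: clusterwide-do-not-run-as-root.kubewarden.admission
    namespaceSelector:
      matchExpressions:
        - key: kubernetes.io/metadata.name   # set by Kubernetes on every namespace
          operator: NotIn
          values: ["kubewarden"]
    clientConfig:
      service:
        name: policy-server-default
        namespace: kubewarden
        path: /validate/clusterwide-do-not-run-as-root
        port: 8443
    # failurePolicy: Fail is what turns "no endpoints" into a hard deny
    failurePolicy: Fail
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

With such a selector in place, deleting the policy-server pod would no longer prevent its own recreation, while workloads in every other namespace would still be validated.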

raulcabello commented 1 year ago

This seems to be a bug in the controller. I reproduced the issue.