kubewarden / kubewarden-controller

Manage admission policies in your Kubernetes cluster with ease
https://kubewarden.io
Apache License 2.0

PolicyServer does not detect verificationConfig removal #903

Open kravciak opened 2 weeks ago

kravciak commented 2 weeks ago

When I delete a key by setting `policyServer.verificationConfig=null` on the helm chart, the setting is not applied.

According to the Helm documentation, setting a key to `null` is the right way to delete it.

```console
# disabling verification config has no effect
~ helmer set kubewarden-defaults --set policyServer.verificationConfig=null
~ helm get values kubewarden-defaults -n kubewarden
policyServer: {}

# workaround can be used
~ helmer set kubewarden-defaults --set policyServer.verificationConfig=""
~ helm get values kubewarden-defaults -n kubewarden
policyServer:
  verificationConfig: ""
```
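
A quick way to confirm whether the change actually reached the cluster is to inspect the `PolicyServer` custom resource itself. A minimal sketch (assuming the `default` PolicyServer created by the kubewarden-defaults chart):

```console
# an empty result means the controller dropped the field; otherwise the old
# config map name is still referenced
~ kubectl get policyserver default -o jsonpath='{.spec.verificationConfig}'
```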

To reproduce:

```console
CONFIGMAP_NAME="ssc-verification-config"

# create verification config map
~ kubectl -n kubewarden create configmap $CONFIGMAP_NAME --from-file=verification-config=<(kwctl scaffold verification-config)

# enable verification config
~ helm upgrade kubewarden-defaults kubewarden/kubewarden-defaults \
  --set policyServer.verificationConfig=$CONFIGMAP_NAME --reuse-values -n kubewarden

# disable verification config (use latest image to force PS redeployment)
~ helm upgrade kubewarden-defaults kubewarden/kubewarden-defaults \
  --set policyServer.verificationConfig=null --set policyServer.image.tag=latest --reuse-values -n kubewarden

# delete unused config map
~ kubectl delete cm -n kubewarden $CONFIGMAP_NAME

# policy can't be created because of missing verification config map
~ kwctl scaffold manifest --type ClusterAdmissionPolicy registry://ghcr.io/kubewarden/policies/pod-privileged:v0.3.2 | kubectl apply -f -

~ kubectl describe pod -n kubewarden policy-server-default-68d8d5b9fb-4mx87
Warning  FailedMount  7s (x5 over 14s)  kubelet            MountVolume.SetUp failed for volume "verification" : configmap "ssc-verification-config" not found

# new policy server creation hangs on missing cm
```
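
If the Policy Server pod is already stuck on the missing mount, recreating the ConfigMap should unblock the volume mount while the `null` handling is investigated. A sketch reusing the names from the reproduction above, and assuming the chart treats an empty string as unset:

```console
# recreate the config map so the pending pod's volume mount can succeed
~ kubectl -n kubewarden create configmap $CONFIGMAP_NAME --from-file=verification-config=<(kwctl scaffold verification-config)

# then apply the empty-string workaround so the next upgrade drops the reference
~ helm upgrade kubewarden-defaults kubewarden/kubewarden-defaults \
  --set policyServer.verificationConfig="" --reuse-values -n kubewarden
```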
viccuad commented 2 hours ago

I can't reproduce here. Could it be that the end-to-end tests always call with `--reuse-values`, or that one too many `--reuse-values` was used? I took care not to pass `--reuse-values` when setting this field to `null`, and it got correctly reconciled:

<details>
<summary>click here</summary>

```console
$ kubectl create configmap my-signatures-configuration --from-file=verification-config=my-verification-config.yml
$ helm upgrade -i kubewarden-defaults kubewarden/kubewarden-defaults \
    --set policyServer.verificationConfig=my-signatures-configuration --reuse-values -n kubewarden
$ helm upgrade -i kubewarden-defaults kubewarden/kubewarden-defaults -n kubewarden
$ helm -n kubewarden get values kubewarden-defaults
USER-SUPPLIED VALUES:
policyServer:
  verificationConfig: my-signatures-configuration
$ helm upgrade -i kubewarden-defaults kubewarden/kubewarden-defaults -n kubewarden --set policyServer.verificationConfig=null
Release "kubewarden-defaults" has been upgraded. Happy Helming!
NAME: kubewarden-defaults
LAST DEPLOYED: Thu Oct 24 16:45:35 2024
NAMESPACE: kubewarden
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
You now have a `PolicyServer` named `default` running in your cluster.

It is ready to run any `clusteradmissionpolicies.policies.kubewarden.io` or
`admissionpolicies.policies.kubewarden.io` resources.

For more information check out https://docs.kubewarden.io/quick-start.

Discover ready to use policies at https://artifacthub.io/packages/search?kind=13.
$ helm -n kubewarden get values kubewarden-defaults
USER-SUPPLIED VALUES:
policyServer:
  verificationConfig: null
$ kubectl get policyservers -o yaml
apiVersion: v1
items:
- apiVersion: policies.kubewarden.io/v1
  kind: PolicyServer
  metadata:
    annotations:
      meta.helm.sh/release-name: kubewarden-defaults
      meta.helm.sh/release-namespace: kubewarden
    creationTimestamp: "2024-10-24T14:39:33Z"
    finalizers:
    - kubewarden.io/finalizer
    generation: 2
    labels:
      app.kubernetes.io/component: policy-server
      app.kubernetes.io/instance: kubewarden-defaults
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: kubewarden-defaults
      app.kubernetes.io/part-of: kubewarden
      app.kubernetes.io/version: v1.17.0
      helm.sh/chart: kubewarden-defaults-2.4.0
    name: default
    resourceVersion: "1417"
    uid: 3a756fb0-8605-4b3a-ab95-de98a8141c80
  spec:
    affinity: {}
    env:
    - name: KUBEWARDEN_LOG_LEVEL
      value: info
    image: ghcr.io/kubewarden/policy-server:v1.17.0
    replicas: 1
    securityContexts: {}
    serviceAccountName: policy-server
  status:
    conditions:
    - lastTransitionTime: "2024-10-24T14:39:33Z"
      message: ""
      reason: ReconciliationSucceeded
      status: "True"
      type: CertSecretReconciled
    - lastTransitionTime: "2024-10-24T14:39:33Z"
      message: ""
      reason: ReconciliationSucceeded
      status: "True"
      type: ConfigMapReconciled
    - lastTransitionTime: "2024-10-24T14:39:33Z"
      message: ""
      reason: ReconciliationSucceeded
      status: "True"
      type: PodDisruptionBudgetReconciled
    - lastTransitionTime: "2024-10-24T14:39:34Z"
      message: ""
      reason: ReconciliationSucceeded
      status: "True"
      type: DeploymentReconciled
    - lastTransitionTime: "2024-10-24T14:39:34Z"
      message: ""
      reason: ReconciliationSucceeded
      status: "True"
      type: ServiceReconciled
kind: List
metadata:
  resourceVersion: ""
```

</details>
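
If the difference really comes down to `--reuse-values`, keeping the desired state in a values file sidesteps Helm's merge semantics entirely. A sketch, where `defaults-values.yaml` is a hypothetical file:

```console
# the values file holds the complete desired configuration
~ cat defaults-values.yaml
policyServer:
  verificationConfig: my-signatures-configuration

# every upgrade passes the full state; to disable verification, delete the
# verificationConfig line from the file and upgrade again
~ helm upgrade -i kubewarden-defaults kubewarden/kubewarden-defaults -n kubewarden -f defaults-values.yaml
```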