Open dee0sap opened 4 weeks ago
Another observation: after removing the problematic webhooks (see original description) and a lease that seemed to be problematic, then performing a rolling restart of the admission-controller, I saw a different error in the admission-controller log:
2024-10-24T15:46:56Z INFO webhooks.server logging/log.go:184 2024/10/24 15:46:56 http: TLS handshake error from 10.250.0.135:58636: secret "kyverno-svc.kyverno.svc.kyverno-tls-pair" not found
Checking the secrets in the vcluster, that secret does appear to be missing. I haven't checked whether it was missing from the very beginning or not.
kubectl get -A secret
NAMESPACE   NAME                                                     TYPE                             DATA   AGE
default     dockersecret                                             kubernetes.io/dockerconfigjson   1      11h
kyverno     kyverno-cleanup-controller.kyverno.svc.kyverno-tls-ca    kubernetes.io/tls                2      11h
kyverno     kyverno-cleanup-controller.kyverno.svc.kyverno-tls-pair  kubernetes.io/tls                2      29m
kyverno     kyverno-svc.kyverno.svc.kyverno-tls-ca                   kubernetes.io/tls                2      11h
kyverno     sh.helm.release.v1.kyverno.v1                            helm.sh/release.v1               1      11h
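For anyone in the same state, a sketch of how I would confirm the missing TLS pair and nudge Kyverno to recreate it. This assumes the default (self-managed) certificate setup, where the admission controller generates `kyverno-svc.kyverno.svc.kyverno-tls-pair` itself at startup, and assumes the default chart deployment name `kyverno-admission-controller`:

```shell
# Confirm the TLS pair secret the webhook server expects is actually absent
kubectl -n kyverno get secret kyverno-svc.kyverno.svc.kyverno-tls-pair

# With self-managed certificates, Kyverno should recreate the pair on
# startup, so a restart is worth trying after cleaning up the stale
# webhooks and lease
kubectl -n kyverno rollout restart deployment kyverno-admission-controller
kubectl -n kyverno rollout status deployment kyverno-admission-controller
```

If the secret still does not reappear after a restart, that would point at the controller failing earlier in startup (e.g. the configmap list error described below).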
What happened?
I ran helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace -f scripts/kyverno-overrides.yaml to install kyverno in the vcluster.
The admission-controller pod fails to start. I believe the problem is that it is unable to list configmaps.
What did you expect to happen?
I expect the kyverno deployments to run without issue.
How can we reproduce it (as minimally and precisely as possible)?
I believe creating a vcluster and deploying kyverno is all that is required to reproduce the problem.
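A minimal sketch of the repro I have in mind, assuming the vcluster CLI is installed and using default Kyverno values (the vcluster name and namespace here are illustrative, not from my actual setup):

```shell
# Create a fresh vcluster and point kubectl at it
vcluster create kyverno-test --namespace vcluster-kyverno
vcluster connect kyverno-test --namespace vcluster-kyverno

# Inside the vcluster, install Kyverno from the official chart repo
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace

# Watch the admission-controller pod fail to become ready
kubectl -n kyverno get pods -w
```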
Anything else we need to know?
Running kubectl get -A configmap didn't have a problem. Also, kubectl auth whoami confirmed I was running as the admission-controller service account.

Host cluster Kubernetes version
vcluster version
VCluster Config