Closed by ionysos 5 years ago
As discussed, one solution could be to annotate pods that are already in the system (annotation: "don't use Karydia").
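A minimal sketch of what such an annotation could look like on an existing pod; the annotation key `karydia.gardener.cloud/exclude` is a hypothetical name chosen here for illustration, not an existing Karydia annotation:

```yaml
# Hypothetical exclusion annotation on a pod that existed
# before Karydia was installed (key name is an assumption).
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
  namespace: default
  annotations:
    karydia.gardener.cloud/exclude: "true"
spec:
  containers:
    - name: legacy-app
      image: nginx
```

Karydia's webhook would then skip enforcement for any pod carrying this annotation.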
I provided a temporary workaround (#187) for this issue, but we have to keep this in mind and clean up the workaround as part of the final solution.
Implementation idea: use webhook filtering by label and namespace.
As described in the Kubernetes (K8s) docs (v1.15; 2019/08/23), in addition to the matching request rules it is now possible to add objectSelector and namespaceSelector blocks to our webhook configurations. This gives us an additional kind of filtering directly at the webhook level, which would decrease the load/requests to karydia.
However, this approach does NOT exclude objects or namespaces by name; instead it excludes them via their labels. It is therefore very similar to our current exclusion handling via annotations, but it operates at the webhook level rather than at the pod/container/karydia level.
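A rough sketch of how such a selector could look in a webhook configuration; the label key `karydia.gardener.cloud/excluded` and the webhook names are assumptions for illustration, not Karydia's actual configuration:

```yaml
# Sketch of label-based webhook filtering (v1.15+ supports
# objectSelector; label key below is a hypothetical example).
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: karydia
webhooks:
  - name: karydia.example.com
    # Skip namespaces that carry the exclusion label.
    namespaceSelector:
      matchExpressions:
        - key: karydia.gardener.cloud/excluded
          operator: DoesNotExist
    # Skip individual objects that carry the exclusion label.
    objectSelector:
      matchExpressions:
        - key: karydia.gardener.cloud/excluded
          operator: DoesNotExist
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
```

With `operator: DoesNotExist`, the API server only forwards requests for unlabeled namespaces/objects to karydia, so excluded workloads never generate webhook traffic at all.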
Description
If I try to delete karydia or perform a rolling update of it, it fails with:
Steps to reproduce
kubectl apply -f ./install/helm-service-account.yaml
helm init --service-account tiller
helm install ./install/charts --name karydia
kubectl delete pod $(kubectl get pods -n kube-system -l app=karydia -o jsonpath='{.items[0].metadata.name}') -n kube-system
kubectl get pod $(kubectl get pods -n kube-system -l app=karydia -o jsonpath='{.items[0].metadata.name}') -n kube-system
OR kubectl describe pod $(kubectl get pods -n kube-system -l app=karydia -o jsonpath='{.items[0].metadata.name}') -n kube-system
Expected behavior
This should work without any issues. The (new) karydia pod should get status Running as well.
Environment
Kubernetes version (kubectl version): v1.15.2