Open beffe123 opened 1 year ago
Thanks for opening your first issue here! Be sure to follow the issue template!
Note that this port is only used for liveness checks. It is already possible to configure the Service port that serves webhook requests.
I know that the service port is configurable. However, at least on GKE, the control plane sends requests directly to the endpoints of the service, which use port 9443. Looking at the service definition of kyverno-svc, the target port is the named port https, which is in fact 9443. So it is not only used for liveness checks.
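For reference, the shape of the Service being described is roughly the following. This is a sketch based on the comment above, not verbatim chart output:

```yaml
# Sketch of kyverno-svc as described above (illustrative, not the exact chart output).
apiVersion: v1
kind: Service
metadata:
  name: kyverno-svc
spec:
  ports:
    - name: https
      port: 443          # the configurable Service port; bypassed by GKE control-plane traffic
      targetPort: https  # resolves to the container's named port "https", i.e. 9443
```

Because the GKE control plane talks to the pod endpoints directly, only the `targetPort` (and the container port it names) matters here.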
I should have noted that I'm using helm to install kyverno.
Seems already done: https://github.com/kyverno/kyverno/pull/6118
@sepich please see my previous comment. #6118 does not fix it. The solution would be to change the `targetPort`, not the `port`. As I mentioned, on GKE the control plane sends requests directly to the endpoints of the service, which use the `targetPort`. The port of the service actually doesn't matter at all. And an important detail is to change the listening port of the webhook from 9443 to something configurable. That port is used in maybe three places:
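The list of places is left implicit in the thread. As an illustration only (the flag name below is hypothetical, not an actual Kyverno flag), the port would plausibly have to change in spots like these:

```yaml
# Illustrative sketch; --webhook-port is a HYPOTHETICAL flag name.
# 1. The container's listening port and the argument that sets it:
containers:
  - name: kyverno
    args:
      - --webhook-port=10250   # hypothetical flag
    ports:
      - name: https
        containerPort: 10250
# 2. Any probes that hit that port:
    livenessProbe:
      httpGet:
        port: 10250
        scheme: HTTPS
# 3. The Service targetPort (the named port "https" would resolve to the new value).
```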
Problem Statement
On GKE private clusters, traffic from the control plane to port 9443 on the worker nodes is not permitted by default. This is addressed on the troubleshooting page: https://kyverno.io/docs/troubleshooting/#kyverno-fails-on-gke
However, in some cases it is hard to implement the firewall rule. E.g. in a setup with a shared VPC, it has to be created in the project that hosts the shared VPC.
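For context, the workaround from the troubleshooting page amounts to a firewall rule along these lines. Names, network, and tags are placeholders; the control-plane CIDR comes from the private cluster's configuration:

```shell
# Placeholder names; CONTROL_PLANE_CIDR is the private cluster's master range.
gcloud compute firewall-rules create allow-kyverno-webhook \
  --network my-shared-vpc \
  --direction INGRESS \
  --action ALLOW \
  --rules tcp:9443 \
  --source-ranges CONTROL_PLANE_CIDR \
  --target-tags my-gke-nodes
```

In a shared-VPC setup this rule must be created in the VPC host project, which is exactly the friction described above.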
Solution Description
An easy solution would be to make the linstening port of the webhook container configurable and change it to port 10250. Port 10250 is open by default because it is used among other things by the GKE-pre-installed Prometheus webhook. See the Google docs here: https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters?hl=de#add_firewall_rules
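If the port were made configurable, a Helm install could then override it along these lines. The values key below is hypothetical; no such key exists today, which is the point of this request:

```yaml
# Hypothetical values.yaml override; the key name is illustrative only.
kyverno:
  webhookPort: 10250  # within the default GKE control-plane-to-node firewall allowance
```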
Alternatives
No response
Additional Context
No response
Slack discussion
No response
Research