@bbaumgartl thanks for creating this issue! I guess the problem is that we expose CoreDNS on port 1053 instead of 53 (check https://github.com/loft-sh/vcluster/blob/main/manifests/coredns/coredns.yaml#L136). Does it work if you allow port 1053?
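For illustration, an egress rule of roughly this shape would allow that (policy name and selector are assumptions, not from this thread):

```yaml
# Illustrative egress rule allowing DNS to the vcluster CoreDNS
# on 1053 instead of the usual 53:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-vcluster-dns
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - ports:
        - port: 1053
          protocol: UDP
        - port: 1053
          protocol: TCP
```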
I was looking at the service and didn't see that the port is different for the deployment/pods.
Out of curiosity, is there a reason that the deployment uses a different port?
@bbaumgartl yes, the reason is that this lets us run CoreDNS as non-root without any capabilities, but for that to work it cannot listen on any port in the 1-1024 range.
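For reference, a rough sketch of the relevant part of the CoreDNS manifest (paraphrased from the linked coredns.yaml rather than quoted; exact fields may differ):

```yaml
# CoreDNS container listening on 1053 so it can run as non-root
# with all capabilities dropped (binding ports below 1024 would
# otherwise require elevated privileges):
containers:
  - name: coredns
    ports:
      - containerPort: 1053
        name: dns
        protocol: UDP
      - containerPort: 1053
        name: dns-tcp
        protocol: TCP
    securityContext:
      runAsNonRoot: true
      capabilities:
        drop:
          - ALL
```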
What happened?
After applying an egress network policy to pods, they can no longer make requests to the vcluster CoreDNS.
This cannot be mitigated by adding ports 53/udp and 53/tcp to the egress rule.
Other ports, like 80 and 443, work.
It seems that this is not a port problem but something to do with the internal CoreDNS routing, because changing `/etc/resolv.conf` inside the container to `nameserver 1.1.1.1` works (for external domains); a sketch of that check is below.
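Roughly the check described (pod name and exec commands are illustrative):

```sh
# Point the pod at a public resolver, bypassing the vcluster CoreDNS:
kubectl exec -it test -- sh -c 'echo "nameserver 1.1.1.1" > /etc/resolv.conf'
# External lookups now succeed, while in-cluster names no longer resolve:
kubectl exec -it test -- dig google.de
```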
What did you expect to happen?
DNS requests to the internal CoreDNS should not be blocked, or it should at least be possible to allow them with an egress rule.
How can we reproduce it (as minimally and precisely as possible)?
1. Create a vcluster with `values.yml`.
2. Create a `test` container (see the sketch below).
3. Exec into the `test` container and test DNS with `dig test` or `dig google.de`.
4. Add a network policy (see the sketch below) and test it again.
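The `values.yml` used for the vcluster is not shown above, so the sketches below only cover the remaining steps; all names, labels, and images are illustrative assumptions rather than the reporter's actual files.

```yaml
# Illustrative test pod (name, label, and image are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: nicolaka/netshoot   # any image that ships dig will do
      command: ["sleep", "infinity"]
```

The DNS checks from step 3, run via kubectl:

```sh
kubectl exec -it test -- dig test        # in-cluster name
kubectl exec -it test -- dig google.de   # external name
```

And an egress policy of roughly the shape described: 80 and 443 keep working once listed, but adding 53/udp and 53/tcp does not restore DNS (per the discussion above, the vcluster CoreDNS pods actually listen on 1053):

```yaml
# Illustrative egress policy (name and selector are assumptions):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-egress
spec:
  podSelector:
    matchLabels:
      app: test
  policyTypes:
    - Egress
  egress:
    - ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
        - port: 80
          protocol: TCP
        - port: 443
          protocol: TCP
    # Variant also tried: allowing the whole 10.0.0.0/8 range.
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8
```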
Anything else we need to know?
Network policies work on the host cluster.
The CNI is Canal.
We tried different egress network policies: allowing the whole `10.0.0.0/8` range and allowing individual ports.

Host cluster Kubernetes version:
Host cluster Kubernetes distribution:
vcluster version:
Vcluster Kubernetes distribution (k3s (default), k8s, k0s):
OS and Arch: