wolph opened 4 years ago
The cluster CIDR is 10.43.0.0/16; I opened it up to all for testing. The kube-dns service was at 10.43.0.10, handling lookups for cluster domains like `podinfo.namespace.svc.cluster.local`.
Without the firewall rule, outbound lookups would fail, but once I added it they started working again.
@graytonio `FIREWALL_OUTBOUND_SUBNETS` does change routing, so it is a routing+firewall issue, not just a firewall issue. Isn't 10.0.0.0/8 the local network the gluetun pod is already part of? By default Gluetun configures routing and the firewall to allow communication with local subnets, so it's strange it did not allow the cluster DNS server. Maybe exec into Gluetun, run `ip route`, and check the difference with and without `FIREWALL_OUTBOUND_SUBNETS=10.0.0.0/8`?
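A quick way to sanity-check the subnet question above (this is just an illustration, not part of Gluetun): the cluster DNS IP from this thread, 10.43.0.10, does fall inside 10.0.0.0/8, but not inside the pod's own 10.42.0.0/16 link network, which is why the extra outbound subnet matters.

```python
import ipaddress

# IPs/subnets taken from the values discussed in this thread
dns = ipaddress.ip_address("10.43.0.10")

# Inside the broad private range used for FIREWALL_OUTBOUND_SUBNETS
print(dns in ipaddress.ip_network("10.0.0.0/8"))    # True

# But outside the pod's directly attached network from `ip route`
print(dns in ipaddress.ip_network("10.42.0.0/16"))  # False
```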
On a related note, `DNS_KEEP_NAMESERVER=on` now does not change any DNS settings (on the latest image), although use it at your own risk, since this will have DNS traffic go outside the VPN tunnel. Not a great solution, but a viable one for Kubernetes it seems.
The combination of `FIREWALL_OUTBOUND_SUBNETS=10.0.0.0/8` and `DNS_PLAINTEXT_ADDRESS=10.43.0.10` has DNS resolution for k8s services working for me. Not a fan of hard-coding the kube-dns IP here, but it does the job.
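As a sketch, that combination expressed as a sidecar env block (the kube-dns IP 10.43.0.10 is cluster-specific and would need adjusting for other clusters):

```yaml
env:
  - name: FIREWALL_OUTBOUND_SUBNETS
    value: 10.0.0.0/8
  - name: DNS_PLAINTEXT_ADDRESS
    value: 10.43.0.10 # cluster-specific kube-dns service IP
```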
Going even simpler, using the advice above and some local testing, this also just works for me (using k3s):

```yaml
env:
  - name: DNS_KEEP_NAMESERVER
    value: "on" # quoted so YAML does not parse it as a boolean
  - name: FIREWALL_OUTBOUND_SUBNETS
    value: 10.43.0.0/16 # https://docs.k3s.io/cli/server#networking
```
Looking at `ip route` in the gluetun container I see:
```
<<K9s-Shell>> Pod: debugger/debugger-7d4f4c6bf-42d5w | Container: gluetun
/ # ip route
default via 10.42.0.1 dev eth0
10.42.0.0/24 dev eth0 proto kernel scope link src 10.42.0.172
10.42.0.0/16 via 10.42.0.1 dev eth0
```
TL;DR: Kubernetes services can no longer be resolved because the DNS configuration is being overwritten.
Is this urgent?
What VPN service provider are you using?
What's the version of the program?
Running version latest built on 2020-07-09T11:57:17Z (commit dc1c7ea)
What are you using to run the container?
Extra information
Logs:
Configuration file:
Host OS: DigitalOcean Kubernetes cluster
I believe that `svc.cluster.local` should be added to the `search` parameter in `/etc/resolv.conf`, and that unbound needs to use the internal k8s DNS server to resolve those local domain names.

Running in a normal pod:
Running with the VPN sidecar:
Running from the VPN sidecar:
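For reference, a sketch of what the suggested `/etc/resolv.conf` would look like in the pod; the nameserver IP is the kube-dns address from this thread, and `mynamespace` is a placeholder namespace:

```
nameserver 10.43.0.10
search mynamespace.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```

This mirrors what kubelet normally writes for pods using cluster DNS; the issue is that the VPN container's DNS setup replaces it.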