Closed: mschreiber-npo closed this issue 3 months ago
Hey 👋 This was fixed (feature reverted) in v5.7.1 (see #1462)
If you think this is a different issue or you're still facing this with v5.7.1, feel free to reopen 👍
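In case it helps, this is roughly how you'd pick up the new release and recreate the test cluster (assuming you installed k3d via the install script; adjust for brew/chocolatey/etc. accordingly):

```bash
# check the currently installed version
k3d version
# reinstall via the official install script (fetches the latest release)
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
# recreate the local test cluster so it gets the fixed CoreDNS wiring
k3d cluster delete test
k3d cluster create test
```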
Thank you very much! I'll try again with the newer version! Sorry for the inconvenience!
DNS-Resolution of external domains not working inside pods
Since today (2024-07-08) I have a problem with DNS resolution inside pods running in a k3d cluster. As soon as I comment out the `file` plugin inside the `coredns-custom` ConfigMap, the problem goes away, but without the `file` plugin the DNS resolution of `host.k3d.internal` cannot work.

K3D Version:
The Problem:
I can't resolve any external domain. Here is an example with `google.com`:

** server can't find google.com: NXDOMAIN
command terminated with exit code 1

But `host.k3d.internal` is working just fine (which means CoreDNS is doing its thing).

How to "fix" the Problem:
After a bunch of trial and error (because I really don't know `coredns` that well), it turned out that the problem seems to be rooted in the `coredns` configuration. When I remove the `file` plugin from the `coredns-custom` ConfigMap, it works again:
❯ kubectl -n kube-system describe cm coredns-custom
Name:         coredns-custom
Namespace:    kube-system
Labels:       objectset.rio.cattle.io/hash=a3e4960ef9f39950a366d81f48be07a01f218c1e
Annotations:  objectset.rio.cattle.io/applied: H4sIAAAAAAAA/4yPQevTQBBHv8oy52SzaWpsAoJ/PImoB8GTl8nuJF2TzJSdbURKv7sERQSp/o/D8Hu8dwO8xM+UNApDD1sNBQTMCP0NMISYozAuZWC1YYAeXpumdc68/WA+fXwyaJ...
              objectset.rio.cattle.io/id:
              objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
              objectset.rio.cattle.io/owner-name: coredns-custom
              objectset.rio.cattle.io/owner-namespace: kube-system
Data
additional-dns.db:
@ 3600 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2024061200 1800 900 604800 86400
host.k3d.internal IN A 172.21.0.1
k3d-test-server-0 IN A 172.21.0.2
k3d-test-serverlb IN A 172.21.0.3
hosts.override:
file /etc/coredns/custom/additional-dns.db
BinaryData
Events:
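This is just my reading of the config (I'm not a CoreDNS expert, so treat it as an assumption): the `file` directive above has no zone argument, so it inherits the zone of the server block it gets imported into, which is the root zone `.`. A rough sketch of what the imported snippet effectively does inside the Corefile:

```
# Hypothetical expansion of what the import produces inside the `.:53` block:
.:53 {
    # serves the zone "." authoritatively from additional-dns.db;
    # any name not listed in that db gets an authoritative NXDOMAIN
    file /etc/coredns/custom/additional-dns.db

    # never consulted for those names, because `file` already answered
    # authoritatively (the file plugin has no fallthrough option)
    forward . /etc/resolv.conf
}
```

That would explain why `google.com` returns NXDOMAIN while the names listed in the db still resolve.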
❯ kubectl -n kube-system edit cm coredns-custom
configmap/coredns-custom edited
❯ kubectl -n kube-system rollout restart deployment coredns
deployment.apps/coredns restarted
❯ kubectl exec -i -t dnsutils -- nslookup google.com
Server:    10.43.0.10
Address:   10.43.0.10#53

Non-authoritative answer:
Name:      google.com
Address:   172.217.16.206
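If you want the same fix without an interactive `kubectl edit`, something like this should work (a sketch, assuming the key is named `hosts.override` as in the describe output above):

```bash
# remove the hosts.override key (the one that pulls in the file plugin)
kubectl -n kube-system patch configmap coredns-custom --type=json \
  -p='[{"op": "remove", "path": "/data/hosts.override"}]'
# restart CoreDNS so it picks up the changed custom config
kubectl -n kube-system rollout restart deployment coredns
```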
❯ k3d cluster create test
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-test'
INFO[0000] Created image volume k3d-test-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-test-tools'
INFO[0001] Creating node 'k3d-test-server-0'
INFO[0001] Creating LoadBalancer 'k3d-test-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.21.0.1 address
INFO[0001] Starting cluster 'test'
INFO[0001] Starting servers...
INFO[0001] Starting node 'k3d-test-server-0'
INFO[0005] All agents already running.
INFO[0005] Starting helpers...
INFO[0005] Starting node 'k3d-test-serverlb'
INFO[0011] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0013] Cluster 'test' created successfully!
INFO[0013] You can now use it like this:
kubectl cluster-info
❯ kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
pod/dnsutils created
❯ kubectl exec -i -t dnsutils -- nslookup google.com
Server:    10.43.0.10
Address:   10.43.0.10#53
** server can't find google.com: NXDOMAIN
❯ kubectl exec -i -t dnsutils -- nslookup host.k3d.internal
Server:    10.43.0.10
Address:   10.43.0.10#53

Name:      host.k3d.internal
Address:   172.21.0.1
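To check whether that NXDOMAIN actually comes from CoreDNS itself (an authoritative denial) rather than from the upstream resolver, `dig` from the same pod is more telling than `nslookup` (the dnsutils image ships dig; the service IP is taken from the output above):

```bash
# look for the "aa" (authoritative answer) flag and for an SOA naming
# a.root-servers.net in the AUTHORITY section - that would point at the
# zone file served by the file plugin, not at the upstream resolver
kubectl exec -i -t dnsutils -- dig google.com @10.43.0.10
```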
❯ kubectl -n kube-system edit cm coredns
configmap/coredns edited
❯ kubectl -n kube-system describe cm coredns
Name:         coredns
Namespace:    kube-system
Labels:       objectset.rio.cattle.io/hash=bce283298811743a0386ab510f2f67ef74240c57
Annotations:  objectset.rio.cattle.io/applied: H4sIAAAAAAAA/4yQwWrzMBCEX0Xs2fEf20nsX9BDybH02lMva2kdq1Z2g6SkBJN3L8IUCiVtbyNGOzvfzoAn90IhOmHQcKmgAIsJQc+wl0CD8wQaSr1t1PzKSilFIUiIix4JfRoXHQ...
              objectset.rio.cattle.io/id:
              objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
              objectset.rio.cattle.io/owner-name: coredns
              objectset.rio.cattle.io/owner-namespace: kube-system
Data
Corefile:
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    hosts /etc/coredns/NodeHosts {
        ttl 60
        reload 15s
        fallthrough
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
    import /etc/coredns/custom/*.override
}
import /etc/coredns/custom/*.server
NodeHosts:
172.21.0.2 k3d-test-server-0
172.21.0.1 host.k3d.internal
172.21.0.2 k3d-test-server-0
172.21.0.3 k3d-test-serverlb
BinaryData
Events:
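For context on how `coredns-custom` ends up in this Corefile (my understanding of the k3s setup, so treat it as an assumption): the `coredns-custom` ConfigMap is mounted into the CoreDNS pod at `/etc/coredns/custom/`, and the two `import` directives splice matching files into the config:

```
# ConfigMap key       -> file seen by CoreDNS
# additional-dns.db   -> /etc/coredns/custom/additional-dns.db
# hosts.override      -> /etc/coredns/custom/hosts.override
#
# `import /etc/coredns/custom/*.override` (inside `.:53`) therefore pulls
# hosts.override - and with it the file plugin - into the root server block.
```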
❯ kubectl -n kube-system rollout restart deployment coredns
deployment.apps/coredns restarted
❯ kubectl exec -i -t dnsutils -- nslookup google.com
Server:    10.43.0.10
Address:   10.43.0.10#53

Non-authoritative answer:
Name:      google.com
Address:   142.250.181.238
❯ kubectl exec -i -t dnsutils -- nslookup host.k3d.internal
Server:    10.43.0.10
Address:   10.43.0.10#53

Name:      host.k3d.internal
Address:   172.21.0.1
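Interestingly, `host.k3d.internal` still resolves here even though the `file` plugin is gone. If I read the Corefile above correctly, that's because the same entry also exists in `NodeHosts` (`172.21.0.1 host.k3d.internal`) and is served by the `hosts` plugin, which, unlike `file`, falls through to `forward` for everything it doesn't know:

```
hosts /etc/coredns/NodeHosts {
    ttl 60
    reload 15s
    fallthrough
}
```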