chmodrs opened this issue 1 year ago
Per the comment linked below, NodeLocal DNSCache will read the kube-dns ConfigMap, but will not read the coredns ConfigMap:
https://github.com/kubernetes/dns/issues/452#issuecomment-865206337
Running CoreDNS with NodeLocal DNSCache seems like a fairly typical use case, though, so it would be nice if this were supported.
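For context, node-local-dns does watch the kube-dns ConfigMap, which carries stub-domain and upstream-nameserver settings. A minimal sketch of that format (the domain and addresses below are illustrative, not from this issue):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"example.corp": ["10.0.0.53"]}
  upstreamNameservers: |
    ["8.8.8.8"]

Custom server blocks or hosts entries living in the coredns ConfigMap, by contrast, are not picked up.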
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Is there any progress on this issue? I'm having the same problem. How did you end up solving it, @chmodrs?
/remove-lifecycle stale
I fixed the problem, which was caused by a configuration error. When those statements are executed, the resulting Corefile looks like this:
apiVersion: v1
data:
  Corefile: |
    cluster.local:53 {
        ...
        forward . 172.20.0.10 {
            force_tcp
        }
        prometheus :9253
        health 169.254.20.10:8080
    }
This causes only cluster.local names to be forwarded to CoreDNS; every other name is resolved by node-local-dns itself, so it never reaches CoreDNS, and this error is reported.
To solve the problem, add the custom domain's zone to the server block in the Corefile, like this:
apiVersion: v1
data:
  Corefile: |
    rtm cluster.local:53 {
        ...
    }
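For reference, a fuller sketch of what that fixed server block might look like; the plugin list below is assumed from the stock nodelocaldns template rather than taken from the comment:

apiVersion: v1
data:
  Corefile: |
    rtm cluster.local:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . 172.20.0.10 {
            force_tcp
        }
        prometheus :9253
        health 169.254.20.10:8080
    }

With rtm listed alongside cluster.local, queries for names under rtm (such as mydns.rtm) are forwarded to CoreDNS at 172.20.0.10, where the custom hosts entries can answer them.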
@kilsilent: You can't close an active issue/PR unless you authored it or you are a collaborator.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Hi everyone,
What happened? In EKS 1.22, after installing nodelocaldns, in-cluster DNS and external DNS work normally, but the custom hosts stored in the coredns ConfigMap stop working.
What did you expect to happen? I expected custom hosts to keep working normally even with nodelocaldns installed.
How can we reproduce it (as minimally and precisely as possible)?
hosts custom.hosts mydns.rtm {
    8.8.4.4 mydns.rtm
    fallthrough
}
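For context, this hosts stanza lives inside the coredns ConfigMap's Corefile. A minimal sketch of the surrounding server block (the plugin ordering here is an assumption, not taken from the report):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        hosts custom.hosts mydns.rtm {
            8.8.4.4 mydns.rtm
            fallthrough
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }

Before nodelocaldns is installed, resolving the custom host from a pod works: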
kubectl exec -it nginx2-55764b6d95-dvvdh -- ping mydns.rtm
PING mydns.rtm (8.8.4.4): 56 data bytes
64 bytes from 8.8.4.4: seq=0 ttl=105 time=2.401 ms
64 bytes from 8.8.4.4: seq=1 ttl=105 time=1.158 ms
64 bytes from 8.8.4.4: seq=2 ttl=105 time=1.286 ms
#!/bin/sh
echo "Downloading nodelocaldns"
wget https://raw.githubusercontent.com/kubernetes/kubernetes/v1.22.17/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml -O manifests/nodelocaldns.yaml

echo "replacing values"
sed -i '' -e 's/__PILLAR__DNS__DOMAIN__/cluster.local/g' manifests/nodelocaldns.yaml
sed -i '' -e 's/__PILLAR__DNS__SERVER__/172.20.0.10/g' manifests/nodelocaldns.yaml   ### kubectl -n kube-system get service kube-dns
sed -i '' -e 's/__PILLAR__LOCAL__DNS__/169.254.20.10/g' manifests/nodelocaldns.yaml  ### default nodelocaldns ip address
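To finish the reproduction, apply the manifest and confirm the DaemonSet is up; the resource names below assume the stock nodelocaldns manifest:

kubectl apply -f manifests/nodelocaldns.yaml
kubectl -n kube-system get daemonset node-local-dns
### inspect the rendered Corefile served by the cache
kubectl -n kube-system get configmap node-local-dns -o yaml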
kubectl exec -it nginx2-55764b6d95-dvvdh -- ping mydns.rtm
ping: bad address 'mydns.rtm'
command terminated with exit code 1
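This matches the diagnosis earlier in the thread: the node-local Corefile only forwards cluster.local to CoreDNS, so mydns.rtm never reaches the hosts plugin. A sketch of applying the fix described above, assuming the stock manifest's resource names:

### add the custom zone to the node-local Corefile,
### i.e. change "cluster.local:53 {" to "rtm cluster.local:53 {"
kubectl -n kube-system edit configmap node-local-dns

### once node-local-dns reloads its Corefile, the custom host should resolve again
kubectl exec -it nginx2-55764b6d95-dvvdh -- ping mydns.rtm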