aws / containers-roadmap

This is the public roadmap for AWS container services (ECS, ECR, Fargate, and EKS).
https://aws.amazon.com/about-aws/whats-new/containers/

[EKS] [request]: Nodelocal DNS Cache #303

Open BrianChristie opened 5 years ago

BrianChristie commented 5 years ago

Tell us about your request I would like an officially documented and supported method for installing the Kubernetes Node Local DNS Cache Addon.

Which service(s) is this request for? EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard? Kubernetes clusters with a high request rate often experience high rates of failed DNS lookups. For example, this affects us when using the AWS SDKs, particularly with Alpine / musl-libc containers.

The Nodelocal DNS Cache aims to resolve this (together with kernel patches in 5.1 to fix a conntrack race condition).

Nodelocal DNS Addon

Kubeadm is aiming to support Nodelocal-dns-cache in 1.15. k/k #70707

Are you currently working around this issue? Retrying requests at the application level which fail due to DNS errors.

Additional context Kubernetes DNS Issues include:

Attachments
[0] https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Config.html#retryDelayOptions-property
[1] https://lkml.org/lkml/2019/2/28/707
[2] https://blog.quentin-machu.fr/2018/06/24/5-15s-dns-lookups-on-kubernetes/
[3] https://www.weave.works/blog/racy-conntrack-and-dns-lookup-timeouts
[4] https://www.openwall.com/lists/musl/2015/10/22/15
[5] https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-limits

Vlaaaaaaad commented 4 years ago

After way too much time spent on this I think I got this working. I'd love a review from somebody who knows what they're doing tho 😅 I promise I'll create a pretty blog post or a PR to the GitHub AWS EKS docs.

So, the yaml in k/k/cluster/addons/dns/nodelocaldns cannot just be applied. It needs a couple of values replaced first:

* `__PILLAR__DNS__DOMAIN__`

* `__PILLAR__LOCAL__DNS__`

* `__PILLAR__DNS__SERVER__`

Not replacing those will lead to the lovely CrashLoopBackOff and much confusion.

Now the question comes up: what to replace those values with? After way too much wasted time I found out that the amazing eksctl already supports Node-Local DNS caches! They have a very nice PR with a description showing what to replace those values with: https://github.com/weaveworks/eksctl/pull/550. TL;DR:

* `__PILLAR__DNS__DOMAIN__` with `cluster.local` as per [amazon-eks-ami's kubelet-config.json](https://github.com/awslabs/amazon-eks-ami/blob/28845f97c05dacaf699a102faa690a4238b79f02/files/kubelet-config.json#L24)

* `__PILLAR__DNS__SERVER__` with `10.100.0.10` or `172.20.0.10` depending on your VPC CIDR (yes, really -- check out this awesome `if` in [amazon-eks-ami](https://github.com/awslabs/amazon-eks-ami/blob/ca61cc2bb6ef6fe982cc71ede7552a4a2c6b93e9/files/bootstrap.sh#L167-L170)). Or you could just run `kubectl -n kube-system get service kube-dns` and check the cluster IP there

* `__PILLAR__LOCAL__DNS__` with `169.254.20.10`, which is the default address that the nodelocal DNS will bind on each node

Applying the yaml will work then!

Buuut using netshoot and running `kubectl run tmp-shell-no-host-net --generator=run-pod/v1 --rm -i --tty --image nicolaka/netshoot -- /bin/bash` and a `dig example.com` will show that the nodelocal cache is not used.

Running `kubectl run tmp-shell-host --generator=run-pod/v1 --rm -i --tty --overrides='{"spec": {"hostNetwork": true}}' --image nicolaka/netshoot -- /bin/bash` and a `netstat -lntp` showed 169.254.20.10:53 correctly bound.

The cluster also needs to be changed to have the kubelet point to the nodelocal DNS: add `clusterDNS: 169.254.20.10` to your nodegroup in the cluster config, as in the eksctl PR linked above.
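
Something like this in the eksctl cluster config should do it (a sketch only; the `clusterDNS` field comes from the eksctl PR above, while the cluster and nodegroup details are illustrative, untested values):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster          # illustrative
  region: eu-west-1         # illustrative
nodeGroups:
  - name: mygroup           # illustrative
    instanceType: m5.large  # illustrative
    clusterDNS: 169.254.20.10  # kubelet hands this nameserver to new pods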

Unfortunately I was using the Terraform community EKS module so this was not as simple. After some research it actually is pretty simple: just add `--cluster-dns=169.254.20.10` to `kubelet_extra_args`, which for me led to `kubelet_extra_args = "--node-labels=kubernetes.io/lifecycle=spot,nodegroup=mygroup --cluster-dns=169.254.20.10"`.

Changes got applied, all existing nodes were manually terminated, new nodes came up. Redoing the above checks shows nodelocal is indeed used! 🎉


Now, all that said, I don't know much about networking. Does the above look sane? Can this be run in production? This comment confirms it to be safe even in 1.12 (I highly recommend reading the whole discussion there).

tylux commented 4 years ago

I would also be interested in this feature being fleshed out/supported with EKS

ghostsquad commented 4 years ago

The nodelocaldns.yaml file actually has 5 variables, and I'm confused by the difference between `__PILLAR__DNS__SERVER__` and `__PILLAR__CLUSTER__DNS__`:

__PILLAR__DNS__DOMAIN__ == cluster.local
__PILLAR__DNS__SERVER__ == ??
__PILLAR__LOCAL__DNS__ == 169.254.20.10
__PILLAR__CLUSTER__DNS__ == <ClusterIP of Kube/CoreDNS service, e.g 172.20.0.10>
__PILLAR__UPSTREAM__SERVERS__ == /etc/resolv.conf
Vlaaaaaaad commented 4 years ago

@ghostsquad that's my bad as I linked to the master version of k/k/cluster/addons/dns/nodelocaldns. I edited the link now to use the 1.16 version.

In master there's currently work happening for a new and improved version of NodeLocal DNS, hence the new variables. As far as I know (and there is a high chance I am wrong) that's a work in progress and not yet ready/released.

ghostsquad commented 4 years ago

Thank you for the response!

jaygorrell commented 4 years ago

I just got this set up myself and it's "working" great -- meaning, DNS requests are going to 169.254.20.10 and getting answers. But I'm not sure if I'm seeing a resolution to the conntrack problems... I still see insert_failed being incremented and latency doesn't seem to be down for repeat requests.

Vlaaaaaaad commented 4 years ago

After way too much time spent on this I think I got this working. I'd love a review from somebody who knows what they're doing tho 😅 I promise I'll create a pretty blog post or a PR to the GitHub AWS EKS docs.

As promised, blog post about this is up on the AWS Containers Blog: EKS DNS at scale and spikeiness!

It's basically my first comment in this issue, with more details and helpful debugging hints.

raonitimo commented 4 years ago

EKS DNS at scale and spikeiness!

Was this blog post removed?

Anyone still have this?

DZDomi commented 4 years ago

EKS DNS at scale and spikeiness!

Was this blog post removed?

Anyone still have this?

Yes, the blog post is not viewable anymore for me either.

otterley commented 4 years ago

Hi everyone - you can find instructions for installing the node-local DNS cache here: https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/

Vlaaaaaaad commented 4 years ago

Anyone still have this?

A copy of the blog can be found at: https://www.vladionescu.me/posts/eks-dns.html

cregev commented 4 years ago

Hi everyone - you can find instructions for installing the node-local DNS cache here: https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/

@otterley would you use the instructions you pointed out, where the link for setting up the NodeLocal DNS Cache resources in Kubernetes leads to this file on the master branch: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml

Or from the eks-dns post:

Anyone still have this?

A copy of the blog can be found at: https://www.vladionescu.me/posts/eks-dns.html

Which leads to a different branch for setting up the NodeLocal DNS Cache resources: https://github.com/kubernetes/kubernetes/blob/release-1.15/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml

We are currently running EKS version 1.14 (there is no problem upgrading to 1.15 if needed).

Vlaaaaaaad commented 4 years ago

Addons are not that tied to the k8s version.

IIRC for NodeLocal DNS there is a pre-1.16 version (which requires kubelet changes) and a post-1.16 version (which requires no kubelet changes). Very high chances I am wrong on this as I haven't kept up to date with the changes.

diffmike commented 4 years ago

Hi, is there a way to use the NodeLocal DNS cache with an EKS managed node group? In that case it is impossible to set clusterDNS for a node group.

aimanparvaiz commented 3 years ago

After way too much time spent on this I think I got this working. I'd love a review from somebody who knows what they're doing tho 😅 I promise I'll create a pretty blog post or a PR to the GitHub AWS EKS docs.

So, the yaml in k/k/cluster/addons/dns/nodelocaldns cannot just be applied. They need a couple values replaced first:

* `__PILLAR__DNS__DOMAIN__`

* `__PILLAR__LOCAL__DNS__`

* `__PILLAR__DNS__SERVER__`

Not replacing those will lead to the lovely CrashLoopBackOff and much confusion.

Now the question comes up: what to replace those values with? After way too much wasted time I found out that the amazing eksctl already supports Node-Local DNS caches! They do have a very nice PR with a description showing what to replace those values with weaveworks/eksctl#550. TL;DR:

* `__PILLAR__DNS__DOMAIN__` with `cluster.local` as per [amazon-eks-ami's kubelet-config.json](https://github.com/awslabs/amazon-eks-ami/blob/28845f97c05dacaf699a102faa690a4238b79f02/files/kubelet-config.json#L24)

* `__PILLAR__DNS__SERVER__` with ` 10.100.0.10` or `172.20.0.10` depending on your VPC CIDR( yes, really -- check out this awesome `if` in [amazon-eks-ami](https://github.com/awslabs/amazon-eks-ami/blob/ca61cc2bb6ef6fe982cc71ede7552a4a2c6b93e9/files/bootstrap.sh#L167-L170)). Or you could just do a `kubectl -n kube-system get service kube-dns` and check the cluster IP in there

* `__PILLAR__LOCAL__DNS__` with `169.254.20.10` which is like the default address that the nodelocal DNS will bind on each node

Applying the yaml will work then!

Buuut using netshoot and running kubectl run tmp-shell-no-host-net --generator=run-pod/v1 --rm -i --tty --image nicolaka/netshoot -- /bin/bash and a dig example.com will show that the nodelocal cache is not used.

Running kubectl run tmp-shell-host --generator=run-pod/v1 --rm -i --tty --overrides='{"spec": {"hostNetwork": true}}' --image nicolaka/netshoot -- /bin/bash and a netstat -lntp showed the 169.254.20.10:53 correctly bound.

The cluster also needs to be changed to have the kubelet point to the nodelocal DNS: add clusterDNS: 169.254.20.10 to your nodegroup in the cluster config, as in the eksctl PR linked above.

Unfortunately I was using the Terraform community EKS module so this was not as simple. After some research it actually is pretty simple: just add --cluster-dns=169.254.20.10 to kubelet_extra_args which for me led to kubelet_extra_args = "--node-labels=kubernetes.io/lifecycle=spot,nodegroup=mygroup --cluster-dns=169.254.20.10".

Changes got applied, all existing nodes were manually terminated, new nodes came up. Redoing the above checks shows nodelocal is indeed used! 🎉

Now, all that said, I don't know much about networking. Does the above look sane? Can this be run in production? This comment confirms it to be safe in 1.12 even( I highly recommend reading the whole discussion there).

I followed the instructions closely, but running netshoot and `dig example.com` I still see 172.20.0.10. The nodelocaldns pods are running without crashing and the logs are not showing any errors:

2020/08/05 13:33:12 2020-08-05T13:33:12.734Z [INFO] Setting up networking for node cache
cluster.local.:53 on 169.254.20.10
in-addr.arpa.:53 on 169.254.20.10
ip6.arpa.:53 on 169.254.20.10
.:53 on 169.254.20.10
2020-08-05T13:33:12.762Z [INFO] CoreDNS-1.2.6
2020-08-05T13:33:12.762Z [INFO] linux/amd64, go1.11.10,
CoreDNS-1.2.6
linux/amd64, go1.11.10

I am running EKS 1.14 and using TF to control this cluster. I am using kubelet_extra_args = "--cluster-dns=169.254.20.10" in my worker_groups_launch_template_mixed.

Any advice would be appreciated.

Vlaaaaaaad commented 3 years ago

@aimanparvaiz hm... That's odd. Let's try to debug it.

Since the pod is still using 172.20.0.10, that means that the NodeLocalDNS "override" in the normal flow is not there. That makes me think that the kubelet is telling the pod to use 172.20.0.10 instead of 169.254.20.10.

  1. What version of NodeLocalDNS are you using, both container image + the version of yaml you applied?

    I know the master branch on k/k has a newer version. I haven't played with that (yet) and there may be differences in setup. The above instructions are for the yamls in the release-1.15 and release-1.16 branches (they're identical).

  2. Is that a new node? If you had a pre-existing EC2 and then ran a Terraform apply setting the kubelet_extra_args = "--cluster-dns=169.254.20.10", the settings may apply just to new EC2 instances --- but that depends a lot on how you manage your nodes.

  3. Is NodeLocalDNS bound on that host node? As per my blog post, if you run a netstat -lntp do you see a line with 169.254.20.10:53?

    kubectl run tmp-shell-host --generator=run-pod/v1 \
    --rm -it \
    --overrides='{"spec": {"hostNetwork": true}}' \
    --image nicolaka/netshoot -- /bin/bash
    
    # and then the expected output:
    
    netstat -lntp
    
      ...
      tcp   0   0   169.254.20.10:53   0.0.0.0:*   LISTEN   -
      ...
aimanparvaiz commented 3 years ago

@Vlaaaaaaad thanks for responding. I am using image `k8s.gcr.io/k8s-dns-node-cache:1.15.3` and I got the yaml from the master branch (this might be the issue).

This is a new node; I updated TF and manually removed the older nodes. I am using this to specify the new node:

kubectl run --overrides='{"apiVersion": "v1", "spec": {"nodeSelector": { "kubernetes.io/hostname": "ip-A-B-C-D.region.compute.internal" }}}' tmp-shell-no-host-net --generator=run-pod/v1 \
        --rm -it \
        --image nicolaka/netshoot -- /bin/bash

On this same node localdns is bound correctly. I used the same override flag to specify the host. I do see `tcp 0 0 169.254.20.10:53 0.0.0.0:* LISTEN -` on that node.

aimanparvaiz commented 3 years ago

I grabbed the yaml from release-1.15; unless I need to refresh the nodes again, I am still seeing the same behavior.

Vlaaaaaaad commented 3 years ago

@aimanparvaiz did you find the root cause after all? I remember this moving to Slack, but no conclusion. Maybe your solution will help other people too 🙂

aimanparvaiz commented 3 years ago

@aimanparvaiz did you find the root cause after all? I remember this moving to Slack, but no conclusion. Maybe your solution will help other people too 🙂

@Vlaaaaaaad I am not sure if I can safely say that I found the root cause. I deployed the latest version of Nodelocal DNS Cache, swapped out the EKS nodes with newer ones, and the errors stopped. Thanks for all your help, along with Chance Zibolski. Here is the link to the complete Slack conversation if anyone is interested: https://kubernetes.slack.com/archives/C8SH2GSL9/p1596646078276000.

dorongutman commented 3 years ago

I'm using EKS's Kubernetes 1.17, and I don't quite understand whether I can use the nodelocaldns yaml file from the master branch, or whether I have to take the one from the release-1.17 branch. This is the diff between the two:

$ diff nodelocaldns-1.17.yaml nodelocaldns-master.yaml 
100,102c100
<         forward . __PILLAR__UPSTREAM__SERVERS__ {
<                 force_tcp
<         }
---
>         forward . __PILLAR__UPSTREAM__SERVERS__
124,125c122,126
<        labels:
<           k8s-app: node-local-dns
---
>       labels:
>         k8s-app: node-local-dns
>       annotations:
>         prometheus.io/port: "9253"
>         prometheus.io/scrape: "true"
133a135,138
>       - effect: "NoExecute"
>         operator: "Exists"
>       - effect: "NoSchedule"
>         operator: "Exists"
136c141
<         image: k8s.gcr.io/k8s-dns-node-cache:1.15.7
---
>         image: k8s.gcr.io/dns/k8s-dns-node-cache:1.15.14
dorongutman commented 3 years ago

@Vlaaaaaaad any chance you know this ^^ ?

Vlaaaaaaad commented 3 years ago

Hey @dorongutman! Apologies, I am rather busy with some personal projects and I forgot to answer this 😞

My blog post is in desperate need of an update, and right now I lack the bandwidth for that. I hope I'll get to it before the end of the year, but we'll see.

Hm... based on the updated NodeLocalDNS docs there are only a couple of variables that need changing. The other variables are replaced by NodeLocalDNS when it starts. Not at all confusing 😄 The ones that need changing seem to be the same ones as in my first comment on this issue:

  • __PILLAR__DNS__DOMAIN__ with cluster.local
  • __PILLAR__DNS__SERVER__ with 10.100.0.10 or 172.20.0.10 AKA the output of kubectl -n kube-system get service kube-dns
  • __PILLAR__LOCAL__DNS__ with 169.254.20.10

There also seems to be no need to set --cluster-dns anymore as NodeLocalDNS discovers the address dynamically and changes the node DNS config.
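
For reference, this is roughly what the relevant container spec in the upstream nodelocaldns.yaml looks like after the substitution -- the -localip flag carries both the link-local address and the kube-dns ClusterIP, which is what lets the cache pick up traffic without a kubelet change (a sketch from my reading of the manifest, untested; the image tag and IPs are just example values):

      # Excerpt of the node-local-dns DaemonSet container (values after substitution are examples)
      containers:
        - name: node-cache
          image: k8s.gcr.io/dns/k8s-dns-node-cache:1.15.14
          args:
            - "-localip"
            - "169.254.20.10,172.20.0.10"   # link-local address plus the kube-dns ClusterIP
            - "-conf"
            - "/etc/Corefile"
            - "-upstreamsvc"
            - "kube-dns-upstream"           # Service used to reach CoreDNS for upstream lookups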

As I said, I've got no bandwidth to actually test the latest NodeLocalDNS --- this comment is just a bunch of assumptions from my side. If any of y'all has the time to test it and blog about it, I can help review!

chuong-dao commented 3 years ago

So I got NodeLocalDNS working, but I needed CoreDNS to serve as a backup. I added this to the eksctl config to get multiple nameserver entries in /etc/resolv.conf:

  kubeletExtraConfig:
    clusterDNS: ["169.254.20.10","10.100.0.10"]

169.254.20.10 is NodeLocalDNS; 10.100.0.10 is the CoreDNS service IP.

That works, but when I tested failover by spinning down the NodeLocalDNS pods, nothing gets resolved. I expected that it would fall back to 10.100.0.10, but nothing is showing up.

cdobbyn commented 3 years ago

I hope this helps someone. I struggled with this for a while.

As long as you're using your own EC2 worker nodes you have access to modify the kubelet args (which is a requirement for this). I personally use Terraform for this, but it pretty much just creates the following launch configuration user data (notice the --kubelet-extra-args):

#!/bin/bash -xe
/etc/eks/bootstrap.sh --b64-cluster-ca 'asdf' --apiserver-endpoint 'https://asdf.gr7.us-east-1.eks.amazonaws.com'  --kubelet-extra-args "--cluster-dns=169.254.20.10" 'my_cluster_name'

So as per above you're looking to add this to your kubelet so that ALL of your nodes will use this IP for DNS queries.

--cluster-dns=169.254.20.10

After that it's pretty much cake: jam this ConfigMap (below) into the yaml you can find here: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
                success 9984 30
                denial 9984 5
        }
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . __PILLAR__CLUSTER__DNS__ {
                force_tcp
        }
        prometheus :9253
        health 169.254.20.10:8080
        }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . __PILLAR__CLUSTER__DNS__ {
                force_tcp
        }
        prometheus :9253
        }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . __PILLAR__CLUSTER__DNS__ {
                force_tcp
        }
        prometheus :9253
        }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . __PILLAR__UPSTREAM__SERVERS__ {
                force_tcp
        }
        prometheus :9253
        }

If you are using managed nodes you are SOL. As far as I know it's not on their roadmap to implement this on managed nodes (although they really should).

asumitha-aws commented 3 years ago

My customer is interested in this feature. They need to be able to set the DNS config on the node for NodeLocal DNS. Extra args are not currently supported with the EKS optimized AMI.

mollerdaniel commented 3 years ago

My customer is interested in this feature. They need to be able to set the DNS config on the node for NodeLocal DNS. Extra args are not currently supported with the EKS optimized AMI.

--kubelet-extra-args '--cluster-dns=<IP>'

I have been running Nodelocal DNS Cache on EKS optimized AMI since 2019.

denniswebb commented 3 years ago

You no longer even need to do the --kubelet-extra-args '--cluster-dns=<IP>' as the latest version will "take over" the IP address of coredns by putting itself first in iptables, so traffic gets routed to it before it hits the rules that would route it to coredns. This means you can install and start using it immediately, without even having to roll your nodes.

Just follow the instructions here and replace __PILLAR__DNS__SERVER__, __PILLAR__LOCAL__DNS__, and __PILLAR__DNS__DOMAIN__ in the entire yaml file.

Or for the lazy, just run: curl -s https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml | sed -e 's/__PILLAR__DNS__SERVER__/172.20.0.10/g;s/__PILLAR__LOCAL__DNS__/169.254.20.10/g;s/__PILLAR__DNS__DOMAIN__/cluster.local/g' | kubectl apply -f -

luisdavim commented 3 years ago

The following script should install node local dns:

#!/usr/bin/env bash

# Docs: https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/

version="master"

curl -sLO "https://raw.githubusercontent.com/kubernetes/kubernetes/${version}/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml"

kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP})
domain="cluster.local"
localdns="169.254.20.10"

# If kube-proxy is running in IPTABLES mode:
sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml

# If kube-proxy is running in IPVS mode:
# sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/,__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml

kubectl create -f nodelocaldns.yaml

But what if the cluster is already running CoreDNS? Should we remove it, or can they co-exist?

omnibs commented 2 years ago

From the architecture diagram here I imagine they ~can~ need to co-exist, but I haven't tried setting this up yet. Just been hit by DNS slowness in k8s and just started doing my homework.

YuvalItzchakov commented 2 years ago

Question regarding the NodeLocal cache setup.

According to this blog post, when running a pod and running `dig example.com`, I should be seeing the node-local-cache pod tunnel the request through the defined local IP address, 169.254.20.10. I do not see this happening:

; <<>> DiG 9.16.1-Ubuntu <<>> example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49712
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;example.com.           IN  A

;; ANSWER SECTION:
example.com.        30  IN  A   93.184.216.34

;; Query time: 3 msec
;; SERVER: 10.100.0.10#53(10.100.0.10) <---- According to the blog, this should be 169.254.20.10
;; WHEN: Sun Nov 14 13:42:14 UTC 2021
;; MSG SIZE  rcvd: 67

However, when I open logging for the NodeLocal instance running for the node the pod is using, I do see the request going through NodeLocal:

node-local-dns-k8dqm node-cache [INFO] 172.26.XXX.XXX:42491 - 863 "A IN example.com. udp 52 false 4096" NOERROR qr,rd,ra 67 0.001107078s

Am I missing something?

zdraganov commented 2 years ago

Is there anything different when we run this in a Calico setup, rather than with the AWS CNI?

cilindrox commented 2 years ago

@YuvalItzchakov according to this post, the newer versions of the app set an iptables OUTPUT chain rule; you can verify via:

kubectl -n kube-system exec -it \
  $(kubectl get po -n kube-system -l k8s-app=node-local-dns -o jsonpath='{.items[].metadata.name}') \
  -- iptables -L OUTPUT
adambro commented 1 year ago

I've done the setup on EKS, but the NodeLocal DNS pods do not resolve DNS queries, while the cluster-wide CoreDNS does. I haven't replaced the nodes yet; I just applied the config according to the official docs and hoped the iptables magic would work.

That is similar to @YuvalItzchakov's problem described a few comments ago. After checking the iptables output per @cilindrox I see the correct binding. However, when I checked iptables directly on the node it does not use the 169.254.20.10 local IP at all:

[root@ip-10-0-51-88 ~]# iptables -vL OUTPUT
Chain OUTPUT (policy ACCEPT 71490 packets, 8200K bytes)
 pkts bytes target     prot opt in     out     source               destination
 5506  986K ACCEPT     udp  --  any    any     ip-172-20-0-10.eu-west-1.compute.internal  anywhere             udp spt:domain
    0     0 ACCEPT     tcp  --  any    any     ip-172-20-0-10.eu-west-1.compute.internal  anywhere             tcp spt:domain
    0     0 ACCEPT     udp  --  any    any     ip-10-0-51-88.eu-west-1.compute.internal  anywhere             udp spt:domain
    0     0 ACCEPT     tcp  --  any    any     ip-10-0-51-88.eu-west-1.compute.internal  anywhere             tcp spt:domain
 796K   49M KUBE-SERVICES  all  --  any    any     anywhere             anywhere             ctstate NEW /* kubernetes service portals */
7254K  960M KUBE-FIREWALL  all  --  any    any     anywhere             anywhere

Any idea why the iptables output would differ between a K8s pod and the node? This seems to be the reason why the magic traffic routing to the local IP for DNS resolution does not work.

otterley commented 1 year ago

Any idea why the iptables output would differ on K8s pod and on node?

iptables rules are scoped to the network namespace in which they are created. Pods run in their own network namespace (unless you configure hostNetwork: true), so they do not inherit the iptables rules of the node itself.

adambro commented 1 year ago

It seems that my approach to observability was wrong. I wanted to check if the DNS request was sent to the local IP or the cluster IP, but apparently it listens on both on the node:

sh-4.2$ netstat -tln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 169.254.20.10:53        0.0.0.0:*               LISTEN
tcp        0      0 172.20.0.10:53          0.0.0.0:*               LISTEN

So even DNS requests sent to the 172.20.0.10 IP are being resolved locally. That means my previous approach to observability (looking at the IP of the resolver) was flawed and we need a different one.

tanvp112 commented 1 year ago

Hey @dorongutman! Apologies, I am rather busy with some personal projects and I forgot to answer this 😞

My blog post is in desperate need of an update, and right now I lack the bandwidth for that. I hope I'll get to it before the end of the year, but we'll see.

Hm... based on the updated NodeLocalDNS docs there are only a couple of variables that need changing. The other variables are replaced by NodeLocalDNS when it starts. Not at all confusing 😄 The ones that need changing seem to be the same ones as in my first comment on this issue:

  • __PILLAR__DNS__DOMAIN__ with cluster.local
  • __PILLAR__DNS__SERVER__ with 10.100.0.10 or 172.20.0.10 AKA the output of kubectl -n kube-system get service kube-dns
  • __PILLAR__LOCAL__DNS__ with 169.254.20.10

There also seems to be no need to set --cluster-dns anymore as NodeLocalDNS discovers the address dynamically and changes the node DNS config.

As I said, I've got no bandwidth to actually test the latest NodeLocalDNS --- this comment is just a bunch of assumptions from my side. If any of y'all has the time to test it and blog about it, I can help review!

@Vlaaaaaaad, I have followed your posts and successfully deployed NodeLocal DNS, thank you for your guide. As I am new to this, can I check with you: even though NodeLocal DNS has cached DNS records, as soon as CoreDNS goes down (e.g. scaled to zero), a query for any service in the cluster (e.g. kubernetes.default.svc.cluster.local) fails immediately. Is this expected? I wonder why it fails instantly since it has the cached record from kube-dns... Is there any setting for NodeLocal DNS to keep responding until the record's TTL expires?

mrparkers commented 1 year ago

@tanvp112 if you want this behavior, you can edit the Corefile within the ConfigMap for node-local-dns to include serve_stale within the cache plugin.

Docs for cache plugin: https://coredns.io/plugins/cache/

I verified that this works in my own installation. I think there are tradeoffs to this approach though. It may actually be better for DNS queries to fail instead of responding with potentially incorrect information.
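
For example, the cluster.local zone in the node-local-dns ConfigMap could end up looking something like this (a sketch only; the 1h duration is an arbitrary example, not a recommendation, and the other zones stay unchanged):

data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
                success 9984 30
                denial 9984 5
                serve_stale 1h    # serve expired entries when the upstream (CoreDNS) can't be reached
        }
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . __PILLAR__CLUSTER__DNS__ {
                force_tcp
        }
        prometheus :9253
        health 169.254.20.10:8080
        }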

tanvp112 commented 1 year ago

@mrparkers, thanks for the guidance. Does this mean that by default node local dns only caches external domain names and not the cluster domain (*.cluster.local)? The Kubernetes documentation and the GKE documentation suggest that CoreDNS/kube-dns will only be contacted when there's a cache miss... if true, a query to node local dns such as kubernetes.default.svc.cluster.local should not fail instantly when CoreDNS/kube-dns is down.

edify42 commented 1 year ago

@tanvp112 I suspect that if you're caching cluster.local results, things like pod eviction and the resulting new pod IP won't be consistent with the cluster state if there's a cache keeping the old value.

tanvp112 commented 1 year ago

@edify42, TTL should help here, unless cluster.local is never cached... which is the ambiguity in the documents above.

oguzhanaygn commented 1 year ago

@denniswebb I installed the nodelocaldns cache on my EKS 1.27 cluster following the instructions closely, but it didn't take over for some reason. My pods were still getting the coredns service IP as their nameserver. When I pass --kubelet-extra-args '--cluster-dns=<IP>' to my managed node group, everything seems perfectly fine now. What would be the reason for this?

Thank you very much.

isaac88 commented 10 months ago

Hello @denniswebb Are you using kube-proxy in IPVS mode? If so, you need to configure the kubelet to add --kubelet-extra-args '--cluster-dns='. More info: https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/#configuration

rotemad commented 3 months ago

I followed all the steps mentioned here and I'm getting strange behavior: there are some external DNS queries to which NodelocalDNS adds the k8s cluster suffix:

[ERROR] plugin/errors: 2 app.ext-service.com.NAMESPACE.svc.cluster.local. AAAA: read tcp 172.20.0.10:34918->172.20.0.10:53: i/o timeout
[ERROR] plugin/errors: 2 app.ext-service.com.NAMESPACE.svc.cluster.local. AAAA: read tcp 172.20.0.10:44261->172.20.0.10:53: i/o timeout
[ERROR] plugin/errors: 2 app.ext-service.com.NAMESPACE.svc.cluster.local. A: read tcp 172.20.0.10:55295->172.20.0.10:53: i/o timeout
2024-03-12T10:15:17.735020027Z [ERROR] plugin/errors: 2 google.com.NAMESPACE.svc.cluster.local. A: read tcp 172.20.0.10:59988->172.20.0.10:53: i/o timeout
2024-03-12T10:15:17.735086710Z [ERROR] plugin/errors: 2 google.com.NAMESPACE.svc.cluster.local. A: read tcp 172.20.0.10:56905->172.20.0.10:53: i/o timeout
2024-03-12T10:15:17.735204342Z [ERROR] plugin/errors: 2 google.com.NAMESPACE.svc.cluster.local. AAAA: read tcp 172.20.0.10:59866->172.20.0.10:53: i/o timeout
2024-03-12T10:15:17.735209652Z [ERROR] plugin/errors: 2 google.com.NAMESPACE.svc.cluster.local. AAAA: read tcp 172.20.0.10:59820->172.20.0.10:53: i/o timeout

My NodelocalDNS configuration looks like most of yours; I double-checked the replaced values (generated using the sed command from the docs):

  Corefile: |
    cluster.local:53 {
        errors
        cache {
                success 9984 30
                denial 9984 5
        }
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . 172.20.0.10 {
                force_tcp
        }
        health 169.254.20.10:8080
        }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . 172.20.0.10 {
                force_tcp
        }
        }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . /etc/resolv.conf
        }

Not sure if it's relevant, but the /etc/resolv.conf file contains options ndots:5.

Using EKS version 1.28 and k8s-dns-node-cache:1.22.28

gaffneyd4 commented 3 months ago

Those lookups come from pods in your cluster where the DNS options have not lowered ndots below 5.
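
For reference, lowering ndots is a per-pod setting; a dnsConfig fragment along these lines is what that change usually looks like (an illustrative sketch only, the value is an example):

  # Pod spec fragment: with ndots lowered to 2, names with 2 or more dots
  # (e.g. app.ext-service.com) are tried as absolute names first instead of
  # being expanded with the cluster search suffixes
  spec:
    dnsConfig:
      options:
        - name: ndots
          value: "2"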

gomgomshrimp commented 3 months ago

Hello all, I have deployed NodeLocal DNS into EKS following this issue and the Kubernetes documentation, but when I test, the queries fail with timeout errors.

Please help me resolve this issue. I'm using EKS 1.27.

Below is the /etc/resolv.conf of the test pod (dnsPolicy and dnsConfig are default settings):

netshoot-7b56b754bb-cmkfl:~# cat /etc/resolv.conf 
search ns-play.svc.cluster.local svc.cluster.local cluster.local ap-northeast-2.compute.internal
nameserver 172.20.0.10
options ndots:5

Testing with the dig and nslookup commands:

netshoot-7b56b754bb-cmkfl:~# dig +short @169.254.20.10 www.com
45.33.2.79
173.255.194.134
45.79.19.196
45.33.30.197
72.14.185.43
45.33.20.235
96.126.123.244
72.14.178.174
198.58.118.167
45.33.18.44
45.56.79.23
45.33.23.183

netshoot-7b56b754bb-cmkfl:~# dig +short @172.20.0.10 example.com
93.184.216.34

netshoot-7b56b754bb-cmkfl:~# nslookup kubernetes.default
;; communications error to 172.20.0.10#53: timed out
;; communications error to 172.20.0.10#53: timed out
;; communications error to 172.20.0.10#53: timed out
;; no servers could be reached

netshoot-7b56b754bb-cmkfl:~# nslookup flaskapp.ns-play.svc.cluster.local
;; communications error to 172.20.0.10#53: timed out
;; communications error to 172.20.0.10#53: timed out
;; communications error to 172.20.0.10#53: timed out
;; no servers could be reached

Log of the NodeLocal DNS pod at this time:

[INFO] 100.64.185.85:44370 - 40810 "A IN kubernetes.default.ns-play.svc.cluster.local. udp 62 false 512" - - 0 30.000202592s
[ERROR] plugin/errors: 2 kubernetes.default.ns-play.svc.cluster.local. A: dial tcp 172.20.188.230:53: i/o timeout
[INFO] 100.64.185.85:42713 - 40810 "A IN kubernetes.default.ns-play.svc.cluster.local. udp 62 false 512" - - 0 30.001063271s
[ERROR] plugin/errors: 2 kubernetes.default.ns-play.svc.cluster.local. A: dial tcp 172.20.188.230:53: i/o timeout
[INFO] 100.64.185.85:34459 - 40810 "A IN kubernetes.default.ns-play.svc.cluster.local. udp 62 false 512" - - 0 30.000200407s
[ERROR] plugin/errors: 2 kubernetes.default.ns-play.svc.cluster.local. A: dial tcp 172.20.188.230:53: i/o timeout

[INFO] 100.64.185.85:60041 - 36397 "A IN flaskapp.ns-play.svc.cluster.local.ns-play.svc.cluster.local. udp 78 false 512" - - 0 30.000218251s
[ERROR] plugin/errors: 2 flaskapp.ns-play.svc.cluster.local.ns-play.svc.cluster.local. A: dial tcp 172.20.188.230:53: i/o timeout
[INFO] 100.64.185.85:42720 - 36397 "A IN flaskapp.ns-play.svc.cluster.local.ns-play.svc.cluster.local. udp 78 false 512" - - 0 30.000205236s
[ERROR] plugin/errors: 2 flaskapp.ns-play.svc.cluster.local.ns-play.svc.cluster.local. A: dial tcp 172.20.188.230:53: i/o timeout
[INFO] 100.64.185.85:44301 - 36397 "A IN flaskapp.ns-play.svc.cluster.local.ns-play.svc.cluster.local. udp 78 false 512" - - 0 30.000199686s
[ERROR] plugin/errors: 2 flaskapp.ns-play.svc.cluster.local.ns-play.svc.cluster.local. A: dial tcp 172.20.188.230:53: i/o timeout

There are some timeout errors like dial tcp 172.20.188.230:53: i/o timeout. I found that 172.20.188.230 is the ClusterIP for kube-dns-upstream. Based on the above information, it seems like the problem is that the query upgraded to TCP times out. This is just a guess; I'm not sure.

> kubectl get svc -n kube-system
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns-upstream   ClusterIP   172.20.188.230   <none>        53/UDP,53/TCP   58m

The configuration for node local dns is probably the same for all of you, but I'm attaching it.

  Corefile: |
    cluster.local:53 {
        log
        errors
        cache {
                success 9984 30
                denial 9984 5
        }
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . __PILLAR__CLUSTER__DNS__ {
                force_tcp
        }
        prometheus :9253
        health 169.254.20.10:8080
        }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . __PILLAR__CLUSTER__DNS__ {
                force_tcp
        }
        prometheus :9253
        }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . __PILLAR__CLUSTER__DNS__ {
                force_tcp
        }
        prometheus :9253
        }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 172.20.0.10
        forward . __PILLAR__UPSTREAM__SERVERS__
        prometheus :9253
        }
joadr commented 2 months ago

Is there a way to install node-local-dns on a managed eks nodegroup?