joesteffee opened this issue 1 year ago
Thanks for the request @joesteffee! We'll look into it.
The man page is telling me the default value is 1 - does this value not work well in your setup? (You mentioned 5)
AWS EKS defaults this to 5. The default in Linux is 1, and the recommended default for CoreDNS is 3.
@joesteffee out of curiosity - where does EKS set this?
Starting with aws-k8s-1.28, the distro as a whole has started moving away from wicked towards systemd-networkd/resolved. That means for the short term we are supporting both, but we need to be cognizant about taking on new settings that explicitly work for systemd-resolved in the longer term.
I had some time to research this a little bit and came across this systemd issue, and more specifically this comment where @poettering explains the rationale behind systemd-resolved not supporting ndots-like functionality.
I haven't had time to go digging, but I'd be curious what other distros that use systemd-resolved do for this particular setting.
Regardless of what resolver the host distro uses, kubelet will pass a modified resolv.conf into containers that use the overlay network. If they're using glibc, they will understand and respect the ndots option.
As I understand it, this request is to be able to configure kubelet's behavior.
I'm not 100% sure where the configuration is coming from initially (VPC DHCP maybe?), but if you look at /etc/resolv.conf on any EKS node using Bottlerocket, it has ndots:5 set by default. It's my understanding that the /etc/resolv.conf on the host is inherited when it is injected into pod overlay networks, since modifying the ndots setting in the host's /etc/resolv.conf causes the new setting to be present in containers.
We have worked around this issue for now by setting up node-local DNS caching: https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/ So, we have a caching DNS server (local-dns) sitting in front of our caching DNS servers (coredns), which then sits in front of our VPC-level caching DNS servers provided by AWS. All because every local domain lookup was driving us over rate limits set by AWS due to all the NXDOMAIN responses that should never have been made to begin with.
Here's a clearer example of what is happening: a lookup of github.com first causes a lookup of github.com appended to each configured search domain, before the bare name is tried.
As you can see, every external domain uses at least 3x as many DNS requests as necessary to resolve (or more if additional search domains are used), impacting performance and potentially hitting limits imposed by upstream DNS servers. A more desirable behavior is to try the local search domains last, as controlled by the ndots setting.
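To make the expansion concrete, here is a minimal sketch of glibc's search-list logic as I understand it. The search domains below are the ones kubelet typically injects for a pod in the `default` namespace; yours will differ, and this simplifies details of the real resolver:

```python
# Sketch of how glibc's stub resolver expands a name using the
# "ndots" option and the search list from /etc/resolv.conf.

def candidate_names(name: str, search: list[str], ndots: int) -> list[str]:
    """Return the names glibc would try, in order."""
    if name.endswith("."):          # trailing dot: absolute name, no search expansion
        return [name]
    dots = name.count(".")
    absolute_first = dots >= ndots  # enough dots: try the name as-is first
    expanded = [f"{name}.{domain}" for domain in search]
    return [name] + expanded if absolute_first else expanded + [name]

# Typical search list kubelet injects for a pod in the "default" namespace:
search = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]

# With kubelet's hard-coded ndots:5, "github.com" (1 dot < 5) is tried
# against every search domain first, producing three NXDOMAINs:
print(candidate_names("github.com", search, ndots=5))
# With ndots:2, a name with two or more dots is resolved directly first:
print(candidate_names("api.github.com", search, ndots=2))
```

Note that lowering ndots to 2 only helps names with at least two dots; a single-label-plus-TLD name like github.com still goes through the search list first unless it is written with a trailing dot (github.com.).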
It looks like many people end up using https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config to configure pods with dnsConfig set to a lower ndots if they aren't using dot-terminated (fully qualified) queries. Something like:
```yaml
spec:
  dnsConfig:
    options:
      - name: ndots
        value: "2"
      - name: edns0
```
This works for each pod but doesn't set it for the entire node. I haven't found anything that lets one set ndots for the node; it seems most use dnsConfig to change this on specific pods, and I think most setups configure CoreDNS at the cluster level.
> It's my understanding that the /etc/resolv.conf on the host is inherited when it is injected into pod overlay networks, as modifying the /etc/resolv.conf ndots setting on the host causes the new setting to be present in containers.
I don't believe this is how it works on Bottlerocket, since ndots isn't set by default on the node, so this appears to be coming from the cluster or kubelet. I haven't been able to pinpoint it yet, but wanted to share what I've found so far.
I do think it would be useful for users to specify these settings, since it looks like you can provide an /etc/resolv.conf file explicitly to kubelet with the --resolv-conf flag: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/. It seems useful to provide a way to override this, but I don't believe you can do that today in Bottlerocket.
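For reference, on a distro where you do control the kubelet unit (which, as noted, is not possible on Bottlerocket today), the flag is typically wired in with a systemd drop-in. The file paths and the KUBELET_EXTRA_ARGS convention here are kubeadm-style and illustrative only:

```ini
# /etc/systemd/system/kubelet.service.d/20-resolv-conf.conf
# Illustrative sketch for a kubeadm-style node; not supported on Bottlerocket.
[Service]
Environment="KUBELET_EXTRA_ARGS=--resolv-conf=/etc/kubelet-resolv.conf"
```

kubelet would then copy the nameserver, search, and options lines (including a custom ndots) from that file into each pod's /etc/resolv.conf instead of using the host's.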
I did notice that systemd-resolved is called out as a known issue here: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues, so we might need to see if that logic is kicking in and causing additional difficulties.
I have also been trying to adjust the ndots setting in the /etc/resolv.conf that the kubelet injects into the containers.
I found the source of the ndots 5 setting in the kubelet code here, which is then used here. Unfortunately, the ndots value is hard-coded and there isn't a way to override that default with something lower. It does seem that if we could set the --resolv-conf flag, we could override the default resolv.conf that kubelet is generating, but it doesn't seem like Bottlerocket supports that. Is there another option for this that is supported?
Thanks for the links to the code @ClareCat! I did some looking and agree that providing --resolv-conf is one way to do this, but the flag is deprecated: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/. I don't actually see a way to do this via kubelet configuration files. From reading the code, I think providing an empty string would achieve this result, but it may stop working in the future since it is currently deprecated. I'll aim to follow up with something more targeted to recommend, but it does seem that choosing a different DNS configuration for your pods could work too: https://github.com/kubernetes/kubernetes/blob/release-1.31/pkg/kubelet/network/dns/dns.go#L440. I'd have to do more reading of the code to figure out how you might do this, though.
What I'd like: the ability to set ndots in /etc/resolv.conf, similar to how nameservers and search-list are set. The default ndots value of 5 is not useful in the majority of AWS k8s deployments.
Any alternatives you've considered: a custom admission controller to set ndots on every pod in the cluster (why not just do it at the host level? This is far too complicated), or a custom userdata script to run echo "options ndots:5" >> /etc/resolv.conf (but custom bash userdata isn't supported).
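To illustrate the request, a settings sketch like the following would fit Bottlerocket's model. To be clear, the ndots key here is hypothetical and does not exist today; name-servers and search-list are the existing settings.dns keys:

```toml
# Hypothetical sketch: "ndots" is NOT a real Bottlerocket setting today.
# name-servers and search-list are existing settings.dns keys.
[settings.dns]
name-servers = ["10.0.0.2"]
search-list = ["svc.cluster.local", "cluster.local"]
ndots = 2   # hypothetical: would emit "options ndots:2" in /etc/resolv.conf
```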