Open hazadam opened 6 years ago
Hm, I actually think that documentation is incorrect (well, at least partially). I'm also not sure why that worked for you previously in 18.06.
Note that on systems that have systemd-resolved enabled, /etc/resolv.conf is no longer the "leading" configuration file, but another file is used instead (/run/systemd/resolve/resolv.conf).
Docker 18.09 detects if systemd-resolved is running and, if so, uses the correct one (code for that was added in https://github.com/moby/moby/pull/37485/commits/e353e7e3f0ce8eceeff657393cba2876375403fa).
Trying to reproduce the /etc/hosts issue, I tested this on Docker 18.06.1, 18.09.0, and even an old 17.06 docker, and all produce the same result;
Add an entry to /etc/hosts
echo "123.123.123.123 foo.bar" >> /etc/hosts
Ping that host from inside a container (on the default "bridge" network);
docker run --rm busybox sh -c 'ping -c1 foo.bar'
ping: bad address 'foo.bar'
And the same, using a custom network
docker network create bla
docker run --rm --network=bla busybox sh -c 'ping -c1 foo.bar'
ping: bad address 'foo.bar'
Adding a custom host to the container works;
docker run --rm --network=bla --add-host=foo.bar:123.123.123.123 busybox sh -c 'ping -c1 foo.bar'
PING foo.bar (123.123.123.123): 56 data bytes
--- foo.bar ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
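(The 100% packet loss above is expected: 123.123.123.123 isn't a reachable address; the point is that the name was resolved.) As a small sketch, the entry that --add-host creates can also be seen directly in the container's /etc/hosts, where it shows up as the last line;
docker run --rm --add-host=foo.bar:123.123.123.123 busybox sh -c 'cat /etc/hosts'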
When using the default (bridge) network;
- docker copies the host's resolv.conf to generate a resolv.conf for the container; a custom DNS (if --dns is set) rewrites the file
- that resolv.conf is "mounted" from the host to the container (so that it can be accessed from inside the container)
When using "host" networking (--network=host);
- docker copies the host's resolv.conf and /etc/hosts file (if present)
- those resolv.conf and /etc/hosts files are "mounted" from the host to the container (so that they can be accessed from inside the container)
When using a custom bridge network;
- docker generates a resolv.conf for the container, containing an entry for the embedded DNS
- that resolv.conf is "mounted" from the host to the container (so that it can be accessed from inside the container)
The code to generate the initial /etc/hosts file in the container is in libnetwork: https://github.com/docker/libnetwork/blob/d0ae17dcfaa1f21e3b0f5d55bba4239f08489640/etchosts/etchosts.go#L27-L112
And the code to generate the container's /etc/resolv.conf file is in https://github.com/docker/libnetwork/blob/d0ae17dcfaa1f21e3b0f5d55bba4239f08489640/resolvconf/resolvconf.go#L223-L254
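For a running container, the host-side locations of those generated files can be inspected; a small sketch (the container name dnsdemo is just an example):
docker run -d --name dnsdemo busybox sleep 300
docker inspect --format '{{.ResolvConfPath}}' dnsdemo
docker inspect --format '{{.HostsPath}}' dnsdemo
docker rm -f dnsdemo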
Add a comment to the end of /etc/resolv.conf on the host, and add a "search domain" (this will affect your networking, so this is just to illustrate :sweat_smile:)
echo "# Hello world" >> /etc/resolv.conf
echo "search localdomain.com" >> /etc/resolv.conf
Check its content;
cat /etc/resolv.conf
nameserver 2001:4860:4860::8844
nameserver 2001:4860:4860::8888
nameserver 8.8.8.8
# Hello world
search localdomain.com
Now, start a container on the default (bridge) network, and check the /etc/resolv.conf inside the container.
Notice that the IPv4 nameserver and the search domain are copied from the host (the IPv6 nameservers are not), and comments are left in place:
docker run --rm busybox sh -c 'cat /etc/resolv.conf'
nameserver 8.8.8.8
# Hello world
search localdomain.com
Now, start a container with a custom DNS set (also on the default "bridge" network);
docker run --rm --dns=1.1.1.1 busybox sh -c 'cat /etc/resolv.conf'
search localdomain.com
nameserver 1.1.1.1
This time a fresh resolv.conf is generated for the container: the custom DNS is written to the container's copy of resolv.conf, but the search domain is kept intact.
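For completeness, the search domain can be overridden the same way with --dns-search (a sketch; output should look roughly like this);
docker run --rm --dns=1.1.1.1 --dns-search=example.com busybox sh -c 'cat /etc/resolv.conf'
search example.com
nameserver 1.1.1.1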
In host networking, the container runs in the host's networking namespace, so it will also use the same networking configuration as the host (hence the host's resolv.conf and /etc/hosts being mounted inside the container);
docker run --rm --network=host busybox sh -c 'cat /etc/resolv.conf'
nameserver 2001:4860:4860::8844
nameserver 2001:4860:4860::8888
nameserver 8.8.8.8
# Hello world
search localdomain.com
And the /etc/hosts is also the same as on the host;
docker run --rm --network=host busybox sh -c 'cat /etc/hosts'
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
123.123.123.123 foo.bar
So, looks like that part of the docs needs some updating;
- if systemd-resolved is active, the /run/systemd/resolve/resolv.conf file is used instead of /etc/resolv.conf
- /etc/hosts should be removed there; I don't think it's used, other than when running with --network=host (but perhaps I'm overlooking something)
- unless a custom DNS is configured (--dns is set on docker run, or configured in the daemon defaults), docker will copy the host's DNS settings to the container when it's created
ping @fcrisciani anything I overlooked here?
@thaJeztah looks good. Also, if the resolv.conf for some reason cannot be used, e.g. it points to a localhost IP like nameserver 127.0.0.1, the tasks will get a default set of DNS servers, the Google ones: 8.8.8.8 and 8.8.4.4.
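A rough way to see that fallback (a sketch; it assumes the host's /etc/resolv.conf only lists a loopback resolver such as 127.0.0.53, as it does with the systemd-resolved stub symlink);
docker run --rm busybox sh -c 'cat /etc/resolv.conf'
nameserver 8.8.8.8
nameserver 8.8.4.4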
I think this might have something to do with whether Ubuntu was installed as 18.04, or whether it was upgraded from 16.04. I came across this same issue when upgrading my Kubernetes hosts from Ubuntu 16.04 and Docker 17.03, to Ubuntu 18.04 and Docker 18.09. The containers all have the same problem where their /etc/resolv.conf points to 8.8.8.8 instead of inheriting. When I downgraded them to Docker 18.06, they work. On the host, I noticed that /run/systemd/resolve/resolv.conf does not actually contain any resolvers.
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
# No DNS servers known.
I also have a separate VM that was installed fresh as Ubuntu 18.04, and that one works fine with Docker 18.09. I also see that the /run/systemd/resolve/resolv.conf on that fresh host does have a DNS entry, and the containers running on that host correctly get the same DNS entry.
Can someone from Docker confirm this is an issue? We need to upgrade our Docker hosts to 18.09.2 for the latest runc CVE, but we cannot upgrade without breaking all of our containers.
@shubb30 if you're currently on 18.06, you could still update to 18.06.2 (which has the fix for the CVE). I'm not sure why the /run/systemd/resolve/resolv.conf would be empty in the "upgrade" scenario, but that looks like a potential bug in either Ubuntu or systemd-resolved (as I think that should contain the canonical DNS server configuration if resolved is used).
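For anyone hitting the same upgrade scenario, a few diagnostics that may help narrow it down (a sketch);
ls -l /etc/resolv.conf                  # is it a symlink, and to which file?
cat /run/systemd/resolve/resolv.conf    # does resolved know about any uplink DNS servers?
systemd-resolve --status                # per-link DNS configuration as seen by resolved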
I recently ran into this issue after upgrading our Ubuntu 16.04 server to 18.04 and Docker to 18.09.
I looked through the man page of systemd-resolved.service:
/ETC/RESOLV.CONF
Four modes of handling /etc/resolv.conf (see resolv.conf(5)) are supported:
· systemd-resolved maintains the /run/systemd/resolve/stub-resolv.conf file for compatibility with traditional Linux programs. This file may be symlinked from /etc/resolv.conf. This file lists the 127.0.0.53 DNS stub (see
above) as the only DNS server. It also contains a list of search domains that are in use by systemd-resolved. The list of search domains is always kept up-to-date. Note that /run/systemd/resolve/stub-resolv.conf should not
be used directly by applications, but only through a symlink from /etc/resolv.conf. This file may be symlinked from /etc/resolv.conf in order to connect all local clients that bypass local DNS APIs to systemd-resolved with
correct search domains settings. This mode of operation is recommended.
· A static file /usr/lib/systemd/resolv.conf is provided that lists the 127.0.0.53 DNS stub (see above) as only DNS server. This file may be symlinked from /etc/resolv.conf in order to connect all local clients that bypass
local DNS APIs to systemd-resolved. This file does not contain any search domains.
· systemd-resolved maintains the /run/systemd/resolve/resolv.conf file for compatibility with traditional Linux programs. This file may be symlinked from /etc/resolv.conf and is always kept up-to-date, containing information
about all known DNS servers. Note the file format's limitations: it does not know a concept of per-interface DNS servers and hence only contains system-wide DNS server definitions. Note that
/run/systemd/resolve/resolv.conf should not be used directly by applications, but only through a symlink from /etc/resolv.conf. If this mode of operation is used local clients that bypass any local DNS API will also bypass
systemd-resolved and will talk directly to the known DNS servers.
· Alternatively, /etc/resolv.conf may be managed by other packages, in which case systemd-resolved will read it for DNS configuration data. In this mode of operation systemd-resolved is consumer rather than provider of this
configuration file.
Note that the selected mode of operation for this file is detected fully automatically, depending on whether /etc/resolv.conf is a symlink to /run/systemd/resolve/resolv.conf or lists 127.0.0.53 as DNS server.
Originally the DNS is set in /etc/resolv.conf; after upgrading to Ubuntu 18.04 the server retained this setting, so it seems systemd-resolved.service is running in mode #4, just acting as a client.
The proper way to configure systemd-resolved.service seems to be modifying /etc/systemd/resolved.conf. I added the settings from /etc/resolv.conf to the DNS= and Domains= options of /etc/systemd/resolved.conf, removed /etc/resolv.conf, symlinked it to /run/systemd/resolve/resolv.conf, and restarted systemd-resolved.service and docker.service. That seemed to resolve the issue.
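For reference, roughly the same steps as commands (a sketch only; the DNS server and search domain are placeholders, and it uses a drop-in under /etc/systemd/resolved.conf.d/ instead of editing resolved.conf directly);
sudo mkdir -p /etc/systemd/resolved.conf.d
printf '[Resolve]\nDNS=192.168.1.1\nDomains=localdomain.com\n' | sudo tee /etc/systemd/resolved.conf.d/dns.conf
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
sudo systemctl restart systemd-resolved.service docker.service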
I think that while reading /run/systemd/resolve/resolv.conf is OK in one scenario, it doesn't really cover all modes of systemd-resolved.service; maybe using the output of systemd-resolve --status would be more reliable.
This is not good. The least I can confirm is that the behavior did change between 17.09.0 and 18.09.1, as the exact same docker-compose configuration works on one and not the other. It seems like the embedded DNS server isn't properly forwarding requests to the host on the newer version... (EDIT: what makes me think this is that the resolv.conf in both containers is exactly the same, but one can resolve hostnames and the other cannot.)
I'd really like a fix that doesn't involve setting the nameserver address manually anywhere since the server acquires it by DHCP (and therefore the address is susceptible to change).
EDIT: Problem seems to be solved by adding network_mode: 'bridge' to docker-compose.yml. See this issue for more information...
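For anyone wanting to try the same workaround, a minimal compose file might look like this (a sketch; the service name and image are placeholders). With the default bridge there is no embedded DNS server, so the container gets a copy of the host's DNS configuration, which is presumably why this sidesteps the problem;
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  app:
    image: busybox
    command: ping -c1 foo.bar
    network_mode: 'bridge'
EOF
docker-compose up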
@thaJeztah @fcrisciani
Note that on systems that have systemd-resolved enabled, /etc/resolv.conf is no longer the "leading" configuration file, but another file is used instead (/run/systemd/resolve/resolv.conf).
I'm very dubious as to whether this is the right thing to be doing in docker. The /run/systemd/resolve/resolv.conf file usually contains the following comment at the top:
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
The important part here is "Third party programs must not access this file directly, but only through the symlink at /etc/resolv.conf."
If my understanding is correct, /etc/resolv.conf is the de facto source for local DNS resolution settings (even on Ubuntu 18.04+), while /run/systemd/resolve/resolv.conf is simply an autogenerated file from systemd-resolved that may be symlinked to. I emphasize "may" because some third-party applications, such as VPN clients, may choose to ignore systemd's resolv.conf and instead replace /etc/resolv.conf with their own version (after backing it up, of course). This scenario is problematic if a docker container needs to access resources on the VPN but can't, because it inherited from the wrong resolv.conf file. This is a problem I personally ran into.
In short, it seems to me that the docker container should always inherit from /etc/resolv.conf, and the user should be responsible for choosing whether /etc/resolv.conf points to /run/systemd/resolve/resolv.conf or contains a custom configuration.
Thoughts?
I totally agree, especially in light of that comment inside the file.
I second that - even if systemd-resolved is running, the important quote is:
Third party programs must not access this file directly, but only through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way, replace this symlink by a static file or a different symlink.
My VPN connection program replaces that /etc/resolv.conf file, yet docker still takes the obsolete DNS info from the /run/.../resolv.conf file instead of the updated /etc/resolv.conf one - DNS obviously no longer works inside the container because of that.
I can confirm that DNS name resolution is not working with docker-compose.
/run/systemd/resolve/resolv.conf is the wrong file though, isn't it? It should use /run/systemd/resolve/stub-resolv.conf to get the in-docker behaviour to match the host... especially when a VPN and per-interface DNS servers are involved, this gets really bad.
However... yeah, nevermind. Since the systemd-resolved stub is listening on the loopback interface, that's no good! I really would like to get this working sanely; it's especially painful on a road-warrior laptop where you move around between networks and VPNs a lot...
This works great... forwarding DNS from docker0 => 127.0.0.53 ...
sudo socat -v TCP-LISTEN:53,fork,reuseaddr,bind=172.17.0.1 TCP:127.0.0.53:53
sudo socat -v UDP-LISTEN:53,fork,reuseaddr,bind=172.17.0.1 UDP:127.0.0.53:53
docker run --rm -it --dns 172.17.0.1 ...
Now the remaining part is influencing all the pieces of software that run docker containers to pass that --dns ... flag to docker.
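If passing --dns everywhere is impractical, the same address can also be set as a daemon-wide default in /etc/docker/daemon.json (a sketch; it assumes 172.17.0.1 stays the docker0 gateway address);
cat /etc/docker/daemon.json
{
  "dns": ["172.17.0.1"]
}
sudo systemctl restart docker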
Expected behavior
Works in Docker version 18.06.1-ce, build e68fc7a
The /etc/hosts file contains some domains for development. When I ping such a domain inside a container (on a user-defined network), I get the right IP address.
The docs say that a container inherits DNS settings from the Docker daemon, including /etc/hosts and /etc/resolv.conf. (https://docs.docker.com/config/containers/container-networking/#dns-services)
I don't know exactly how that works, but when I inspect the network, this is what happens when pinging a domain from /etc/hosts:
Check the image above: you can see that something originating from 127.0.0.1 asks my local DNS resolver at 127.0.0.53, the answer comes back, and then the container seems to already know the address.
Actual behavior
In the updated version of Docker, the DNS query goes from the container straight to a DNS server outside my network. I know I can override this with the --dns option, and I do that now because I have no choice, but it was really convenient to just set up /etc/hosts real quick and go.
Steps to reproduce the behavior
Update docker to version 18.09, add a record to /etc/hosts locally, and ping that domain from a container.
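As a concrete sequence (a sketch; the hostname and address are made up);
echo "123.123.123.123 dev.example.test" | sudo tee -a /etc/hosts
docker network create devnet
docker run --rm --network=devnet busybox sh -c 'ping -c1 dev.example.test'
ping: bad address 'dev.example.test'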
Output of docker version:
Output of docker info:
Additional environment details (AWS, VirtualBox, physical, etc.):
Firewall is up, extra rules are:
22/tcp ALLOW Anywhere
Xdebug ALLOW Anywhere