moby / moby

The Moby Project - a collaborative project for the container ecosystem to assemble container-based systems
https://mobyproject.org/
Apache License 2.0

Can't access internet from containers #13381

Closed cfpeng closed 7 years ago

cfpeng commented 9 years ago

When I ping google.com in the container, it returns: ping: unknown host

[HOST Info]

root@host# uname -a
Linux localhost 4.0.2-x86_64-linode56 #1 SMP Mon May 11 16:55:19 EDT 2015 x86_64 GNU/Linux

root@host# docker version
Client version: 1.6.2
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 7c8fca2
OS/Arch (client): linux/amd64
Server version: 1.6.2
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 7c8fca2
OS/Arch (server): linux/amd64

Start the container:

root@host# docker run --rm -it debian /bin/bash

Start capturing packets:

root@host# tshark -i eth0 -i docker0
 1 0.000000 106.186.. -> 8.8.8.8 DNS 70 Standard query 0xb49a A google.com
 2 0.046688 8.8.8.8 -> 106.186.. DNS 86 Standard query response 0xb49a A 216.58.221.14
 3 -0.000042 172.17.0.4 -> 8.8.8.8 DNS 70 Standard query 0xb49a A google.com
 4 4.171017 fe80::1 -> ff02::1 ICMPv6 118 Router Advertisement from 00:05:73:a0:0f:ff
 5 5.005167 106.186.. -> 8.8.8.8 DNS 70 Standard query 0xb49a A google.com
 6 5.007502 8.8.8.8 -> 106.186.. DNS 86 Standard query response 0xb49a A 216.58.221.14
 7 5.005127 172.17.0.4 -> 8.8.8.8 DNS 70 Standard query 0xb49a A google.com
 8 5.016512 02:42:ac:11:00:04 -> ca:5b:7d:34:78:20 ARP 42 Who has 172.17.42.1? Tell 172.17.0.4
 9 5.016542 ca:5b:7d:34:78:20 -> 02:42:ac:11:00:04 ARP 42 172.17.42.1 is at ca:5b:7d:34:78:20
10 10.010414 106.186.. -> 8.8.8.8 DNS 70 Standard query 0x1367 A google.com
11 10.046683 8.8.8.8 -> 106.186.. DNS 86 Standard query response 0x1367 A 216.58.221.14
12 10.010374 172.17.0.4 -> 8.8.8.8 DNS 70 Standard query 0x1367 A google.com
13 15.015578 106.186.. -> 8.8.8.8 DNS 70 Standard query 0x1367 A google.com
14 15.052782 8.8.8.8 -> 106.186.. DNS 246 Standard query response 0x1367 A 173.194.126.198 A 173.194.126.196 A 173.194.126.197 A 173.194.126.194 A 173.194.126.195 A 173.194.126.193 A 173.194.126.206 A 173.194.126.199 A 173.194.126.200 A 173.194.126.192 A 173.194.126.201
15 15.015538 172.17.0.4 -> 8.8.8.8 DNS 70 Standard query 0x1367 A google.com

root@f82d47432161:/# ip addr
eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.4/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:4/64 scope link
       valid_lft forever preferred_lft forever

root@f82d47432161:/# ping google.com
ping: unknown host

It seems that the host did not forward the DNS response packets to the container.

runcom commented 9 years ago

@VirtualSniper do you have net.ipv4.ip_forward on?
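(A quick way to check this on the host, and to enable it temporarily if it reads 0; this is a generic sysctl check, nothing docker-specific:)

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
$ sudo sysctl -w net.ipv4.ip_forward=1    # enable forwarding for the running kernel only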

cfpeng commented 9 years ago

yes

aboch commented 9 years ago

@VirtualSniper are you able to reproduce this? I tried with no luck. If you can reproduce it, can you please capture what is going on on the vethxxx interface? Thanks.

cfpeng commented 9 years ago

@aboch Yes, here it is:

root@host# tshark -i vetheee49f3
1 0.000000 172.17.0.3 -> 8.8.8.8 DNS 70 Standard query 0x4b9f A google.com
2 5.005164 172.17.0.3 -> 8.8.8.8 DNS 70 Standard query 0x4b9f A google.com
3 5.007449 02:42:ac:11:00:03 -> 52:06:ed:10:28:2a ARP 42 Who has 172.17.42.1? Tell 172.17.0.3
4 5.007462 52:06:ed:10:28:2a -> 02:42:ac:11:00:03 ARP 42 172.17.42.1 is at 52:06:ed:10:28:2a
5 10.010424 172.17.0.3 -> 8.8.8.8 DNS 70 Standard query 0x8327 A google.com
6 15.015621 172.17.0.3 -> 8.8.8.8 DNS 70 Standard query 0x8327 A google.com

clnperez commented 9 years ago

I was also running into this. I was able to work around it by deactivating & deleting docker0 (after stopping docker), and then starting docker again (which re-creates docker0).
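A rough sketch of that workaround (the exact commands are an approximation; brctl comes from bridge-utils, and ip link delete docker0 works as well):

$ sudo systemctl stop docker
$ sudo ip link set docker0 down
$ sudo brctl delbr docker0        # or: sudo ip link delete docker0
$ sudo systemctl start docker     # docker re-creates docker0 on startup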

I do have NetworkManager running, since this is on my laptop, and a VPN running at the moment. But this was also happening while I was in the office on Friday (without the VPN in use).

I wouldn't be surprised if this is a known issue and there are other open issues for it. Does anyone know who to tag from the Docker networking side to find out more?

bear0330 commented 9 years ago

I have the same problem, but it doesn't always happen. At first everything works fine; then, after a few days pass, or after I build some images and start/stop containers, the containers sometimes cannot connect to anything anymore. All my running containers are affected the same way and lose internet connectivity; for example, curl to GitHub (by IP or by domain name) fails:

[root@2b7308d /]# curl http://192.30.252.129
curl: (7) Failed connect to 192.30.252.129:80; No route to host

The only way I can solve this is to restart the docker daemon, after which everything works again. But it bothers me a lot: all my apps and services in containers go down, and I don't even notice until something fails.

Any suggestions for this? Thanks.

thaJeztah commented 9 years ago

ping @aboch

aboch commented 9 years ago

There was a bug in the bridge driver code where the Linux bridge interface MAC address would not be programmed as "SET" on 4.x (x < 3) kernels. The bug is present in docker 1.6.2.

@cfpeng I see your host is running 4.0.2, so it would be affected. The issue has recently been fixed here and made it into the docker/docker code via this, so it will be in docker 1.8.0.

@bear0330 Can you check whether your kernel is a 4.x as well? This would explain why you hit the issue after a while, maybe after spawning a new container.

@cfpeng @bear0330 Could you please check whether you are still hitting this issue with the latest 1.8.0-rcX image?

bear0330 commented 9 years ago

@aboch My kernel is 3.10.0-229.7.2.el7.x86_64. I am running docker on Azure, and I am not sure whether this is an Azure issue (I have no idea). I am going to try running docker on Vultr.

dverbeek84 commented 9 years ago

@aboch same issue here with the same kernel as @bear0330

fkeet commented 9 years ago

Similar issue. Removing the bridge (and relevant cleanup) did not have an effect.

$ uname -a
Linux hostname 3.19.0-25-generic #26-Ubuntu SMP Fri Jul 24 21:17:31 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

$ docker version
Client:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Thu Aug 13 02:40:42 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Thu Aug 13 02:40:42 UTC 2015
 OS/Arch:      linux/amd64

$ docker run -ti ubuntu /bin/bash
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=0.967 ms
....

$ ping www.google.com
ping: unknown host www.google.com

aboch commented 9 years ago

@bear0330 Your issue (from what I can see in your logs) is different from the one hit by @cfpeng and @fkeet. Theirs is related to DNS response packets not being delivered to the container; yours is related to IP reachability: you would get "no route to host" if, for example, the default gateway IP in your container is unset or does not belong to the same network as eth0.
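A quick way to verify that from inside an affected container (generic checks, not specific to this setup; the prompt is just a placeholder):

root@container:/# ip route            # expect a line like: default via 172.17.42.1 dev eth0
root@container:/# ip addr show eth0

If the image does not ship iproute2, route -n and ifconfig eth0 show the same information.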

aboch commented 9 years ago

@cfpeng Given that DNS requests are routed from docker0 to eth0 but responses are not, it makes me think it has to do with iptables. If you have not done so already, could you please run the check-config.sh script that you find in docker/contrib/ to see if any required iptables component is missing?
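For reference, the script lives at contrib/check-config.sh in the source tree, so from a checkout of the docker repository it can be run like this (the path is based on the repository layout at the time):

$ git clone https://github.com/docker/docker.git
$ bash docker/contrib/check-config.sh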

@fkeet Can you also try the same.

Thanks.

bear0330 commented 9 years ago

@aboch I hit the same issue after running docker on a Vultr machine for a few days. Now my container cannot connect to the internet again. Inside the container (my container's hostname is status.xxx.com):

[root@status /]# curl http://www.google.com/
curl: (6) Could not resolve host: www.google.com; Unknown error
[root@status /]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

(hangs; I press Ctrl+C to break)

[root@status /]# curl http://192.30.252.129
curl: (7) Failed connect to 192.30.252.129:80; No route to host

Running docker run -ti ubuntu /bin/bash on the host:

[root@mercury Redis]# docker run -ti ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
Trying to pull repository docker.xxx.com/ubuntu ... failed
Trying to pull repository docker-protected.xxx.com/ubuntu ... failed
latest: Pulling from docker.io/ubuntu
6071b4945dcf: Pulling fs layer
6071b4945dcf: Download complete
5bff21ba5409: Download complete
e5855facec0b: Download complete
8251da35e7a7: Download complete
Status: Downloaded newer image for docker.io/ubuntu:latest
root@63840a13cad5:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

(hangs..., also Ctrl+C)

[root@mercury Redis]# docker version
Client version: 1.6.2
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): ba1f6c3/1.6.2
OS/Arch (client): linux/amd64
Server version: 1.6.2
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): ba1f6c3/1.6.2
OS/Arch (server): linux/amd64
[root@mercury Redis]# docker info
Containers: 18
Images: 142
Storage Driver: devicemapper
 Pool Name: docker-253:1-304602-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file:
 Metadata file:
 Data Space Used: 3.558 GB
 Data Space Total: 107.4 GB
 Data Space Available: 103.8 GB
 Metadata Space Used: 7.365 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.14 GB
 Udev Sync Supported: true
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Kernel Version: 3.10.0-229.11.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 2
Total Memory: 1.797 GiB
Name: mercury.xxx.com
ID: V5PO:Z7CC:LTPM:NICT:I2G5:6B6K:AVTP:IOH5:6GNW:JLOK:VCUF:MSY

There are some log messages in my /var/log/messages; I don't know whether they are related:

Aug 15 11:20:02 mercury systemd: Starting Session 5445 of user root.
Aug 15 11:20:02 mercury systemd: Started Session 5445 of user root.
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="Container 6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534 failed to exit within 10 seconds of SIGTERM - using the force"
Aug 15 11:20:09 mercury kernel: docker0: port 1(veth6388a9a) entered disabled state
Aug 15 11:20:09 mercury kernel: device veth6388a9a left promiscuous mode
Aug 15 11:20:09 mercury kernel: docker0: port 1(veth6388a9a) entered disabled state
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="+job log(die, 6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534, docker.xxx.com/service/redis:2.8.19-latest)"
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="-job log(die, 6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534, docker.xxx.com/service/redis:2.8.19-latest) = OK (0)"
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="+job release_interface(6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534)"
....
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="+job container_inspect(redis-2.8.19)"
Aug 15 11:20:09 mercury NetworkManager[558]: <info>  (veth6388a9a): device state change: activated -> unmanaged (reason 'removed') [100 10 36]
Aug 15 11:20:09 mercury NetworkManager[558]: <info>  (veth6388a9a): deactivating device (reason 'removed') [36]
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="-job release_interface(6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534) = OK (0)"
Aug 15 11:20:09 mercury NetworkManager[558]: <warn>  (docker0): failed to detach bridge port veth6388a9a
Aug 15 11:20:09 mercury dbus-daemon: dbus[470]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
Aug 15 11:20:09 mercury dbus[470]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
Aug 15 11:20:09 mercury systemd: Starting Network Manager Script Dispatcher Service...
Aug 15 11:20:09 mercury dbus-daemon: dbus[470]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Aug 15 11:20:09 mercury systemd: Started Network Manager Script Dispatcher Service.
Aug 15 11:20:09 mercury nm-dispatcher: Dispatching action 'down' for veth6388a9a
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="+job log(stop, 6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534, docker.xxx.com/service/redis:2.8.19-latest)"
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="-job log(stop, 6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534, docker.xxx.com/service/redis:2.8.19-latest) = OK (0)"
...

If you need any further information, please tell me and I will provide it if I can.

cfpeng commented 9 years ago

@aboch I have upgraded to 1.8.1 and the issue still exists.

abronan commented 9 years ago

/cc @LK4D4

aboch commented 9 years ago

@cfpeng @fkeet Just to make sure, can you please post the content of /etc/resolv.conf inside your container?

Also the output of sudo iptables -t nat -L -nv on your host. I want to check whether the masquerade rule is there.

cfpeng commented 9 years ago

@aboch Here it is:

$ sudo iptables -t nat -L -nv
Chain PREROUTING (policy ACCEPT 1944 packets, 117K bytes)
 pkts bytes target     prot opt in     out       source               destination
 1929  117K DOCKER     all  --  *      *         0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 1929 packets, 117K bytes)
 pkts bytes target     prot opt in     out       source               destination

Chain OUTPUT (policy ACCEPT 1126 packets, 69647 bytes)
 pkts bytes target     prot opt in     out       source               destination
    7   497 DOCKER     all  --  *      *         0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 1119 packets, 69150 bytes)
 pkts bytes target     prot opt in     out       source               destination
   22  1365 MASQUERADE all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out       source               destination

$ sudo docker run --rm -it ubuntu /bin/bash
root@0aeb261357d1:/# cat /etc/resolv.conf
# Generated by resolvconf
nameserver 8.8.8.8

aanm commented 9 years ago

@cfpeng do you have selinux?

cfpeng commented 9 years ago

@aanm No.

rcousens commented 9 years ago

I am encountering this issue too. I have to systemctl restart docker on Arch Linux to get access to the internet from within containers.

ajanssens commented 9 years ago

I'm running into the same problem; I tried everything I could find on Google, but nothing fixed the issue.

$ sudo iptables -t nat -L -nv
Chain PREROUTING (policy ACCEPT 47 packets, 3371 bytes)
 pkts bytes target     prot opt in     out       source               destination
    6   423 DOCKER     all  --  *      *         0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 6 packets, 423 bytes)
 pkts bytes target     prot opt in     out       source               destination

Chain OUTPUT (policy ACCEPT 1132 packets, 128K bytes)
 pkts bytes target     prot opt in     out       source               destination
    0     0 DOCKER     all  --  *      *         0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 1132 packets, 128K bytes)
 pkts bytes target     prot opt in     out       source               destination
   41  2948 MASQUERADE all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out       source               destination

$ sudo docker run --rm -it ubuntu /bin/bash
root@abca1b94e4dc:/# cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4

cfpeng commented 9 years ago

After I reinstalled the OS, the problem was resolved.

lykhouzov commented 9 years ago

I had the same error; restarting the docker process helped. It looks like there were some blocked processes after a package update.

poga commented 9 years ago

I've encountered a similar issue: containers can't access the internet until I manually run systemctl restart docker on Arch Linux.

One thing I've noticed is that right after my computer boots up, ip route does not contain the route for the docker0 bridge.

Here's the output before restarting docker:

$ ip route
default via 192.168.0.1 dev wlp2s0  proto static  metric 600
192.168.0.0/24 dev wlp2s0  proto kernel  scope link  src 192.168.0.107  metric 600

After docker restarted:

$ ip route
default via 192.168.0.1 dev wlp2s0  proto static  metric 600
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.42.1
192.168.0.0/24 dev wlp2s0  proto kernel  scope link  src 192.168.0.107  metric 600

not sure if this would help.

bwinterton commented 9 years ago

@poga I just found this same issue, thank you for finding a solution! So are we thinking this is a problem with Docker, or with Arch?

poga commented 8 years ago

@bwinterton Sorry, I don't have enough knowledge to determine the source of the problem. My guess is that something overrode the network bridge config after the docker service started.

fbourigault commented 8 years ago

I also have this issue on all my Arch Linux machines. It looks like a race condition during system startup.
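If it really is an ordering race, one mitigation worth trying (an assumption on my part, not a confirmed fix) is to make the docker unit wait for the network with a systemd drop-in, e.g.:

# /etc/systemd/system/docker.service.d/wait-network.conf  (hypothetical drop-in)
[Unit]
After=network-online.target
Wants=network-online.target

$ sudo systemctl daemon-reload && sudo systemctl restart docker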

thaJeztah commented 8 years ago

@aboch is https://github.com/docker/docker/issues/13381#issuecomment-141410434 something to work with to resolve this?

srus commented 8 years ago

On Ubuntu 12.04 I had to manually edit /etc/resolv.conf and set Google's DNS (8.8.8.8). But after a reboot the file was automatically rewritten with the previous value, so now I have to change it again... Does anyone have a better solution? I'm using DHCP, so /etc/resolv.conf is managed by the system.
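One alternative (assuming docker was installed from the Ubuntu packages, where the daemon reads /etc/default/docker) is to pass the DNS servers to the daemon instead of editing /etc/resolv.conf, so containers get them regardless of what DHCP writes:

# /etc/default/docker
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"

$ sudo service docker restart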

hadim commented 8 years ago

I don't know why, but without restarting the server the following works:

docker run -it --rm --net=host ubuntu ping 8.8.8.8

while this does not:

docker run -it --rm ubuntu ping 8.8.8.8

Using Ubuntu Server 14.04 as host.

cdepillabout commented 8 years ago

I'm experiencing the same problem on Arch and @hadim's solution works for me.

aboch commented 8 years ago

@poga Thanks for the info. It looks like if the docker0 bridge is removed, the veth interface in the container stays in the link-up state, but subsequent packets sent by the container are dropped by Linux, since no IP address is configured on the host end of the veth pair.

This is a very interesting issue, and it's worth investigating whether Linux removed docker0 or (I doubt it) docker did.

But it is surely a different issue from the one originally reported by @cfpeng, where docker0 is present (there is a capture session on it), DNS requests originated by the container are being routed out of the host, and responses are being received by the docker0 bridge.

aboch commented 8 years ago

@hadim Provided you hit the same issue reported by @poga (the docker0 bridge being removed), your test observation is expected.

When you run a container with host-mode networking, the container uses the host OS networking stack; it is not plugged into any bridge.

When you do not specify a networking mode, the container instead uses the default, which is the bridge network. Its eth0 interface is one end of a veth pair, and the other end is plugged into the docker0 bridge. Since the bridge is not there, that end is left dangling.
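To confirm you are in that state, a couple of quick checks on the host (the expected 172.17.x addresses assume the default configuration):

$ ip link show docker0            # "does not exist" means the bridge is gone
$ brctl show docker0              # lists the veth interfaces attached to the bridge (brctl is in bridge-utils)
$ ip route | grep docker0         # the 172.17.0.0/16 route should point at docker0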

AndreaGiardini commented 8 years ago

@hadim's solution works for me as well.

depay commented 8 years ago

I have a similar problem, and restarting the daemon works. CentOS 7.1 / docker 1.7.1.

  1. Using ifconfig, I've found that many veth interfaces still exist even after their containers have stopped, and there are syslog entries for every veth that was not detached, like NetworkManager[998]: <warn> (docker0): failed to detach bridge port veth56029e2. I don't know whether this has any relationship to the broken docker0 bridge. In this situation the network still works fine with --net host.
  2. Sometimes it's even worse: when I start a container with --net host, there is only a lo device in ifconfig, without eth0 or anything else from the host!

iammerrick commented 8 years ago

I'm running into this as well with the prebuilt node package.

ebuildy commented 8 years ago

It looks like a network routing problem; check with "ip route" (or just "route") that the bridge IPs are routed somewhere... (even if all roads lead to Rome ^^)

depay commented 8 years ago

I've finally found the reason for my problem in bridge mode: ip_forward gets reset to 0 when the network is restarted while the docker daemon is still running. Adding net.ipv4.ip_forward=1 to /etc/sysctl.conf and then running sysctl -p (or restarting the network) fixed it.
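In other words (the same fix as described above, just spelled out):

$ echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p
$ sysctl net.ipv4.ip_forward      # should now report 1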

But the problem with host mode is still confusing..

surajx commented 8 years ago

I was trying to install Python dependencies and could not connect to PyPI; I finally traced it to a DNS problem and stumbled upon this issue. Do let me know if any other logs/output are needed to pin down the problem.

Host

surajx@r2g:~$ uname -a
Linux r2g 3.19.0-33-generic #38~14.04.1-Ubuntu SMP Fri Nov 6 18:17:28 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

surajx@r2g:~$ docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:12:04 UTC 2015
 OS/Arch:      linux/amd64
Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:12:04 UTC 2015
 OS/Arch:      linux/amd64

surajx@r2g:~$ docker info
Containers: 34
Images: 64
Server Version: 1.9.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 132
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-33-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 4
Total Memory: 15.56 GiB
Name: r2g
ID: QR3V:YCAJ:X37L:CBBB:D4RX:OOPX:YCPQ:NJ6P:V4QF:EVJY:NP5F:74JD
WARNING: No swap limit support

Container

surajx@r2g:~$ docker run --rm -it ubuntu /bin/bash

root@dc5a47e875cb:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=51 time=13.5 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=51 time=13.6 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 13.500/13.571/13.643/0.136 ms

root@dc5a47e875cb:/# ping google.com
ping: unknown host google.com

surajx commented 8 years ago

I simultaneously captured the traffic on the wlan0 and docker0 interfaces while issuing ping google.com from inside the container. In the docker0 traffic no response from 8.8.8.8 was received (as expected), but something weird happened in the wlan0 traffic: the container's DNS requests were forwarded to both public DNS servers, but instead of proper responses the interface got flooded with malformed packets.

Hope this helps.

surajx@r2g:~$ sudo tcpdump -ni docker0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 65535 bytes
17:31:18.927299 IP 172.17.0.2.53724 > 8.8.8.8.53: 9371+ A? google.com. (28)
17:31:23.932620 IP 172.17.0.2.47207 > 8.8.4.4.53: 9371+ A? google.com. (28)
17:31:23.938285 ARP, Request who-has 172.17.0.1 tell 172.17.0.2, length 28
17:31:23.938338 ARP, Reply 172.17.0.1 is-at 02:42:53:96:8f:9e, length 28
17:31:28.934345 IP 172.17.0.2.53724 > 8.8.8.8.53: 9371+ A? google.com. (28)
17:31:33.939625 IP 172.17.0.2.47207 > 8.8.4.4.53: 9371+ A? google.com. (28)
17:31:38.945046 IP 172.17.0.2.40407 > 8.8.8.8.53: 16090+ A? google.com.anu.edu.au. (39)
17:31:43.950322 IP 172.17.0.2.52317 > 8.8.4.4.53: 16090+ A? google.com.anu.edu.au. (39)
17:31:48.952601 IP 172.17.0.2.40407 > 8.8.8.8.53: 16090+ A? google.com.anu.edu.au. (39)
17:31:53.957837 IP 172.17.0.2.52317 > 8.8.4.4.53: 16090+ A? google.com.anu.edu.au. (39)

surajx@r2g:~$ sudo tcpdump -ni wlan0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on wlan0, link-type EN10MB (Ethernet), capture size 65535 bytes
17:31:18.927366 IP 130.56.238.99.53724 > 8.8.8.8.53: 9371+ A? google.com. (28)
17:31:23.932693 IP 130.56.238.99.47207 > 8.8.4.4.53: 9371+ A? google.com. (28)
17:31:28.934421 IP 130.56.238.99.53724 > 8.8.8.8.53: 9371+ A? google.com. (28)
17:31:33.939696 IP 130.56.238.99.47207 > 8.8.4.4.53: 9371+ A? google.com. (28)
17:31:37.815413 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d IP Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.815430 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d IP Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.815437 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x08 Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.815441 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x08 Information, send seq 0, rcv seq 16, Flags [Command], length 1242
17:31:37.815446 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x0a Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.815450 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x0a Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.815453 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x0c Information, send seq 0, rcv seq 16, Flags [Command], length 674
17:31:37.815457 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x0c Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.815462 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d ProWay NM Information, send seq 0, rcv seq 16, Flags [Command], length 483
17:31:37.815465 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d ProWay NM Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.815469 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x10 Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.815473 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x10 Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.815478 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x12 Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.815482 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x12 Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.815486 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x14 Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.815490 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x14 Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.818044 00:24:13:12:84:00 > 80:ea:96:42:58:6d Unknown DSAP 0x16 Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.818062 00:24:13:12:84:00 > 80:ea:96:42:58:6d Unknown DSAP 0x16 Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.818070 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x18 Information, send seq 0, rcv seq 16, Flags [Command], length 1450
17:31:37.818075 00:24:13:12:84:00 Unknown SSAP 0x16 > 80:ea:96:42:58:6d Unknown DSAP 0x18 Information, send seq 0, rcv seq 16, Flags [Command], length 1450
...
(1000+ more lines of the same thing)

surajx commented 8 years ago

So I did some more digging, and it seems like my university network connection restricts which DNS servers I can use. I changed my host machine's DNS to 8.8.8.8 and couldn't ping google.com from the host either. In fact, I couldn't use any public DNS server as my machine's default DNS.

I used the default DNS IP provisioned by my university connection and, lo and behold, I'm able to ping google.com from inside the container.

surajx@r2g:~$ docker run --dns=150.203.1.10 -it ubuntu /bin/bash
root@f6db3afaf70d:/# ping google.com
PING google.com (203.5.76.208) 56(84) bytes of data.
64 bytes from cache.google.com (203.5.76.208): icmp_seq=1 ttl=54 time=6.78 ms
64 bytes from cache.google.com (203.5.76.208): icmp_seq=2 ttl=54 time=6.16 ms
64 bytes from cache.google.com (203.5.76.208): icmp_seq=3 ttl=54 time=6.09 ms
64 bytes from cache.google.com (203.5.76.208): icmp_seq=4 ttl=54 time=6.02 ms
^C
--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 6.025/6.270/6.789/0.303 ms

Looks like my issue is not docker related.

Thanks, Suraj

svenmueller commented 8 years ago

This also happens for us.

What I noticed is that the "gateway" entry for the bridge subnet is missing on docker nodes that have this kind of internet connectivity issue. How can it happen that the "gateway" entry disappears?

$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "1b913195b86713e97d28869ac0cf0db7c9faef9c9bea9b26f14d6da971281d00",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Containers": {
            "02f757d0c326b9741c21d7bcbe9cc03c9c4f3dd491f8b5068b73372dea5f392c": {
                "EndpointID": "9bc8c2574de8785b4fa67926c297711149c4c59357eae6e857158511ff827f7a",
                "MacAddress": "02:42:ac:11:00:05",
                "IPv4Address": "172.17.0.5/16",
                "IPv6Address": ""
            },
            "0b14a198fd3cc1384f39c0e146be4f1a770a4a2bdd1eb77a68ffa62b424c0c4a": {
                "EndpointID": "71c543b1be46b1823dc2bdce2f8d01a85448136d0adc930bab81157f827805a6",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "329d729b53c90d57859e25ae11069d6cab26c912c7d7f22804554a0f3d96b238": {
                "EndpointID": "089286a247e9a6a20a5b732298626c6190883c7478f7c51243f653b4661f9d15",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "7aca0b44c1b4fa4c7481dd590c29de52a4a3b0fe2501ef60d5d64e4210f9e132": {
                "EndpointID": "e096ec2176cefa967bca055b59316ef59413e736b5671cab0c49791a16add176",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            },
            "c7eaea1604324c7074410b69412a9a21edb872987909ea3779c46bf663e75d7f": {
                "EndpointID": "e4e98c92ba2e04577e122b9345f11cbf77eaff44020632a4c1dd4bb61ebea685",
                "MacAddress": "02:42:ac:11:00:08",
                "IPv4Address": "172.17.0.8/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]
$ docker version
Client:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   76d6bc9
 Built:        Tue Nov  3 19:20:09 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   76d6bc9
 Built:        Tue Nov  3 19:20:09 UTC 2015
 OS/Arch:      linux/amd64
$ docker info
Containers: 6
Images: 89
Server Version: 1.9.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 101
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-71-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 2
Total Memory: 1.95 GiB
Name: xxx.yyyy.com
ID: OXYW:SHGU:TR5X:F3IH:KDON:EWI2:WLVU:E3PS:A7UY:HD3W:GLEY:YB2D
Username: xyz
Registry: https://index.docker.io/v1/
$ uname -a
Linux xxx.yyyy.com 3.13.0-71-generic #114-Ubuntu SMP Tue Dec 1 02:34:22 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

ghost commented 8 years ago

I got the same issue. This may be because of the default DNS servers, 8.8.8.8 and 8.8.4.4. I added more DNS servers to resolve the problem:

docker run --dns=208.67.222.222 --dns=208.67.220.220 --dns=8.8.8.8 --dns=8.8.4.4 golang /bin/bash

svenmueller commented 8 years ago

In my case it's not a DNS issue... pinging an IP directly (e.g. 8.8.8.8) results in no packets coming back.

Does anyone have an idea why the "gateway" entry goes missing for the bridge network?

thaJeztah commented 8 years ago

> Does anyone have an idea why the "gateway" entry goes missing for the bridge network?

ping @aboch perhaps you have an idea (sorry for reeling you in again :))

aboch commented 8 years ago

@svenmueller

Thanks for reporting this. Yes, it is quite strange that the gateway entry is missing from that output. I will check on that.

What does the default gateway setting look like inside the container when you see the issue? Is it really not programmed in there? Thanks.

svenmueller commented 8 years ago

from inside the container:

/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
^C
--- 8.8.8.8 ping statistics ---
148 packets transmitted, 0 packets received, 100% packet loss
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0

outside the container:

$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "7bf4fc513e22e786b52efe56d340e16674fed7f775ccd06207f9a35a33753e78",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Containers": {
            "fab8b7aaee048f1cf9e22e185dae530fe2c299ee8f601535adae58a114ea7840": {
                "EndpointID": "fed0fad60a7744477c4e12656670233cab76e701c9c38dc899509cb0f8a25360",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]

aboch commented 8 years ago

Thanks @svenmueller for the extra info. I don't think the missing gateway entry in the inspect output is relevant to the connectivity problem, as the default gateway is in fact programmed in the container.

You may want to packet capture on the host to see whether the ping packets are making their way out of the host and whether the response ones are coming in or being dropped by some iptables rule.
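A minimal way to do that (interface names are assumptions; replace eth0 with the host's external interface):

$ sudo tcpdump -ni docker0 icmp and host 8.8.8.8     # requests leaving the container side
$ sudo tcpdump -ni eth0 icmp and host 8.8.8.8        # requests/replies on the external side
$ sudo iptables -L FORWARD -nv                       # look for DROP rules (or a DROP policy) with increasing counters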

svenmueller commented 8 years ago

@aboch Thanks for the feedback. How can I do that exactly?