igorastds opened this issue 8 years ago
same here with docker v1.9.1 ("a34a1d5-dirty") @ Arch
:+1: please fix this bug
is there a workaround for this?
Maybe it's an iptables bug. I have no iptables rules other than the docker rules, and 2 networks (docker0 + 1 other). My host has the IP 172.16.0.1/16.
# iptables-save
# Generated by iptables-save v1.4.21 on Sun Jan 17 10:28:50 2016
*nat
:PREROUTING ACCEPT [8794:1217326]
:INPUT ACCEPT [8773:1215606]
:OUTPUT ACCEPT [17899:2751833]
:POSTROUTING ACCEPT [17898:2752201]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.19.0.0/16 ! -o br-bcfe92869fd5 -j MASQUERADE
COMMIT
# Completed on Sun Jan 17 10:28:50 2016
# Generated by iptables-save v1.4.21 on Sun Jan 17 10:28:50 2016
*filter
:INPUT ACCEPT [1364822:334302629]
:FORWARD ACCEPT [13:988]
:OUTPUT ACCEPT [1227632:7107850879]
:DOCKER - [0:0]
-A FORWARD -s 172.19.0.0/16 -d 172.17.0.0/16 -j DROP
-A FORWARD -s 172.17.0.0/16 -d 172.19.0.0/16 -j DROP
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-bcfe92869fd5 -j DOCKER
-A FORWARD -o br-bcfe92869fd5 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i br-bcfe92869fd5 ! -o br-bcfe92869fd5 -j ACCEPT
-A FORWARD -i br-bcfe92869fd5 -o br-bcfe92869fd5 -j ACCEPT
COMMIT
# Completed on Sun Jan 17 10:28:50 2016
ping @mrjana @sanimej @aboch Could you please take a look and either fix or close this issue?
Unbelievable, the bug is still present in the current docker version and no one from the docker dev team cares (@thaJeztah).
@TimothyKlim sorry about that. This fell off the radar. We will take a look at it soon.
1 month later and still no progress
I'm having a [possibly] related issue.
My app receives UDP syslog messages from most sources with the original source address preserved, but one device has its source rewritten to the docker0 interface's IP (seen by running tcpdump on docker0).
I tried disabling the userland proxy as an experiment. The sources that were working before still are, but that one source never reaches docker0 now, which is obviously concerning.
Ubuntu 16.04.2, docker 1.12.6, docker-compose 1.8.0, launched w/ docker-compose up
EDIT: Running conntrack -D -p udp seems to have fixed it for me locally, so this is potentially unrelated.
The problem happens because the host has multiple network interfaces (eth0 and docker0 in this case), and when replying over UDP the server chooses the wrong IP as the source address.
Running tcpdump to capture packets on the wire gives:
14:42:11.861765 IP 172.17.0.3.37218 > 172.16.13.13.5151: UDP, length 6
14:42:11.863471 IP 172.17.0.1.5151 > 172.17.0.3.37218: UDP, length 6
14:42:11.863520 IP 172.17.0.3 > 172.17.0.1: ICMP 172.17.0.3 udp port 37218 unreachable, length 42
- The first packet is the client's UDP request to the server, with source address 172.17.0.3 (the container IP) and destination address 172.16.13.13 (the host's eth0 address).
- The second packet is the response; however, its source address is 172.17.0.1 (the docker0 IP address) instead of 172.16.13.13 as expected.
- The third packet is sent by the client, telling the server that the second packet is INVALID.
As we can see, the real cause is that the second packet chooses the wrong source address. The client does not recognize it, considers it INVALID, does not pass it to the application, and instead replies with an ICMP port-unreachable packet.
Choosing a packet's source address is the kernel's job; for UDP it will by default use the IP address of the interface the packet is sent out on. As for why the kernel drops the second packet: netfilter records valid UDP connections (since UDP is connectionless, this only means netfilter stores the <source IP, source port, destination IP, destination port> tuple) and drops packets that are not in the connection table.
As for the first problem, a socket can actually control which source address is used with a combination of sendmsg and the IP_PKTINFO socket option. See how PowerDNS does it: https://blog.powerdns.com/2012/10/08/on-binding-datagram-udp-sockets-to-the-any-addresses/
In conclusion, this is not a problem caused by docker (although it does create docker0, which is where it all starts). The solution comes in several forms:
- Use TCP instead of UDP; TCP is a connected protocol, so the kernel will use the right address.
- Make the server listen on a specific IP address.
- Change the server socket logic; see: https://github.com/netoptimizer/network-testing/blob/master/src/udp_example02.c
I don't know if my issue is related or not, but I can't send or receive UDP messages in the docker container; it works fine with TCP.
nc -l -u -p 8888
nc -u 127.0.0.1 8888
I am using EXPOSE 8888/udp or -p 8888:8888/udp.
Netcat works fine with TCP, so I don't get why it doesn't work over UDP. I am using OS X Sierra.
Also hitting this issue.. any fix for this ?
running osx Sierra too
+1
+1
+1
+1
+1
mark
I am running into this using ntplib in Python and OpenNTPD using this docker image.
- use TCP instead of UDP, because TCP is a connected protocol, so the kernel will use the right address
- server should listen on a specific IP address
- Change server socket logic, please refer to: https://github.com/netoptimizer/network-testing/blob/master/src/udp_example02.c
I can't use TCP -- NTP runs over UDP. I don't want to change the server or client socket logic -- they are third party code and should be working fine. How should I make the server listen on a specific address then?
Reply to myself: publish the exposed port together with an IP address, as explained here:
docker run -p 10.0.0.3:80:8080 nginx
+1
I ran a DNS server on the host and made a container on the same machine use it. If I don't set the use-vc option in /etc/resolv.conf (which makes DNS queries use TCP), the ping command fails.
+1
+1
+1
+1
The problem happens because host has multiple network interfaces (eth0 and docker0 in this case), and when replying UDP, server chooses the wrong IP as the source address. […]
Thank you! - I disabled the second network adapter on my host and my UDP issue went away!
I think the actual issue lies in the proxy implementation, see moby/libnetwork#1729.
Just hit this same issue. Enabling use-vc didn't work for me, but if I do nc -vz <host-ip> 53, all DNS then works.
Running dnsmasq on the host. resolv.conf inside docker looks like:
nameserver 127.0.0.11
options ndots:0
I don't really know why the use-vc option doesn't work for me, even if I change nameserver to the IPv4 address of the host.
Apparently my issue was that I needed to bind dnsmasq to a specific IP address, not 0.0.0.0, as per https://blog.powerdns.com/2012/10/08/on-binding-datagram-udp-sockets-to-the-any-addresses/
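The "don't bind to the ANY address" fix applies to any UDP server, not just dnsmasq: a socket bound to one concrete address always replies from that address, so the kernel never has to guess a source. A minimal Python sketch on loopback (the address and ephemeral port here are illustrative):

```python
import socket

# Server bound to a concrete address instead of 0.0.0.0: every reply is
# guaranteed to leave with 127.0.0.1 as its source address.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the kernel pick a free port
srv.settimeout(2)
host, port = srv.getsockname()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2)
cli.sendto(b"query", (host, port))

data, client = srv.recvfrom(1024)
srv.sendto(data, client)            # reply necessarily comes from 127.0.0.1

reply, src = cli.recvfrom(1024)
print(src)  # the client sees the reply from the exact address it sent to
```

The downside, as noted earlier in the thread, is that you lose the convenience of listening on all interfaces with a single socket.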
I got an issue with UDP packets and a docker-container. In my case, I solved that in my docker-compose file adding "/udp" in the ports section.
ports:
  - "5223:5223/udp"
Source: https://docs.docker.com/config/containers/container-networking/
I got the same issue when connecting to the UDP port of a container. Maybe docker forwards only TCP connections by default; if you want to reach the UDP port of a container, you should add a forward rule to iptables' PREROUTING chain in the nat table.
e.g.:
iptables -t nat -A PREROUTING -p udp --dport 1234 -j DNAT --to container-ip:1234
In my case, using Technitium DNS in host mode, changing the "DNS Server Local End Points" from 0.0.0.0:53 to [ip]:53 allowed the containers to get DNS over UDP.
I got an issue with UDP packets and a docker-container. In my case, I solved that in my docker-compose file adding "/udp" in the ports section.
ports: "5223:5223/udp"
Source: docs.docker.com/config/containers/container-networking
@developez, are you sure you have the same issue as in this thread, or a different UDP issue? Using ports like that will make docker bind to the port, which means other applications on the host won't be able to bind to it. I'm getting "connection refused" errors from my UDP server. Remember, the title of the issue says "UDP server being located at Docker host".
I don't think there's a workaround for this bug (for UDP) yet.
Description of problem:
When issuing UDP connections from a docker container to the docker host, no incoming UDP packets are received in the container. UDP works for accessing external resources; TCP works for accessing the docker host.
docker version: mostly tested on
Client:
 Version: 1.8.0-dev
 API version: 1.20
 Go version: go1.4.2
 Git commit: b900aaa
 Built: Fri Jul 17 15:15:47 UTC 2015
Still actual in:
 Go version: go1.4.2
 Git commit: 9d3ad6d
 Built: Wed Jul 29 16:26:04 UTC 2015
docker info:
Containers: 283
Images: 687
Storage Driver: overlay
Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.2███████
Operating System: Gentoo/Linux
CPUs: 8
Total Memory: 7.746 GiB
Name: ███████
ID: 7WYK:23Z2:J7KR:JFF2:QMLD:E5Z6:FF5O:LAAA:XOW4:56ZR:BOSD:WWAI
uname -a: Linux ███████ 4.1.2-███████ #2 SMP Mon Jul 20 14:13:03 CEST 2015 x86_64 Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz GenuineIntel GNU/Linux
Environment details (AWS, VirtualBox, physical, etc.): physical
How reproducible:
Steps to Reproduce:
Start a UDP echo server on the Docker host, outside of Docker: ncat -e /bin/cat -k -u -l 5151 (also rechecked with udpqotd, yeah, that one from the old Perl Cookbook; not tested with the server started in a docker container).
Actual Results:
Expected Results:
Additional info:
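For completeness, the ncat echo server from the reproduction steps can be mimicked with a few lines of Python. This is a loopback-only sketch using an ephemeral port; in the actual repro the server listens on port 5151 on the host and the client runs inside a container:

```python
import socket

HOST = "127.0.0.1"

# UDP echo server, equivalent to: ncat -e /bin/cat -k -u -l 5151
# (port 0 here lets the kernel pick a free port for the sketch).
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind((HOST, 0))
srv.settimeout(2)
port = srv.getsockname()[1]

# Client: in the actual repro this runs inside a container, and the echoed
# reply never comes back; on plain loopback, as here, it works.
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2)
cli.sendto(b"hello", (HOST, port))

data, peer = srv.recvfrom(1024)
srv.sendto(data, peer)           # echo the datagram back to the sender

reply, _ = cli.recvfrom(1024)
print(reply)
```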