**Closed** — evmoroz closed this issue 2 years ago
I have made some progress:
```yaml
version: "3.8"

networks:
  foo:
    driver: bridge

services:
  foo:
    image: alpine
    command: [ "tail", "-f", "/dev/null" ]
    networks:
      foo:
      default:
```
In this configuration the container gets two IP addresses, but it is reachable only through the `default` network. I was expecting it to also be accessible via its `foo` network IP address, since that is the behaviour on Linux.

I also see that the routes are created correctly:

```
Adding route for 172.19.0.0/16 -> utun0 (tmp_default)
Adding route for 172.17.0.0/16 -> utun0 (bridge)
Adding route for 172.18.0.0/16 -> utun0 (tmp_foo)
```

but pings to 172.18.0.2 time out. I tried disabling iptables in Docker, but this did not help.
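As a quick sanity check (not part of the original report), the logged routes can be verified to actually cover the unreachable address using Python's standard-library `ipaddress` module — the subnets and target IP below are taken directly from the log above:

```python
import ipaddress

# The three routes logged by docker-mac-net-connect
routes = {
    "tmp_default": ipaddress.ip_network("172.19.0.0/16"),
    "bridge": ipaddress.ip_network("172.17.0.0/16"),
    "tmp_foo": ipaddress.ip_network("172.18.0.0/16"),
}

# The container address whose pings time out
target = ipaddress.ip_address("172.18.0.2")

# Which route(s) would carry traffic to the container?
matches = [name for name, net in routes.items() if target in net]
print(matches)  # -> ['tmp_foo']
```

The target falls squarely inside the `tmp_foo` route, so the route table on the macOS side is not the problem; the failure must be elsewhere on the path.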
@ttyz Thanks for reaching out with your issue. I'm going to try to reproduce it on my end and then provide an update.
Just an update: I was able to reproduce this using the following steps.

Create `docker-compose.yml` with contents:
```yaml
version: "3.8"

networks:
  net1:
    name: net1
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.0.0/24
          gateway: 172.30.0.1
  net2:
    name: net2
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.31.0.0/24
          gateway: 172.31.0.1

services:
  foo:
    image: nginx
    container_name: foo
    networks:
      # Attach foo to both networks
      net1:
      net2:
  bar:
    image: nginx
    container_name: bar
    networks:
      # Attach bar to only the second network
      net2:
```
```shell
$ docker-compose up -d
[+] Running 4/4
 ⠿ Network net2   Created
 ⠿ Network net1   Created
 ⠿ Container bar  Started
 ⠿ Container foo  Started
```
IP addresses of `foo` and `bar`:

```shell
$ docker inspect foo --format '{{.NetworkSettings.Networks.net1.IPAddress}}'
172.30.0.2
$ docker inspect foo --format '{{.NetworkSettings.Networks.net2.IPAddress}}'
172.31.0.3
$ docker inspect bar --format '{{.NetworkSettings.Networks.net2.IPAddress}}'
172.31.0.2
```
Pinging `foo` on `net1` succeeds:

```shell
$ ping 172.30.0.2
PING 172.30.0.2 (172.30.0.2): 56 data bytes
64 bytes from 172.30.0.2: icmp_seq=0 ttl=63 time=0.726 ms
64 bytes from 172.30.0.2: icmp_seq=1 ttl=63 time=0.997 ms
64 bytes from 172.30.0.2: icmp_seq=2 ttl=63 time=0.756 ms
64 bytes from 172.30.0.2: icmp_seq=3 ttl=63 time=1.017 ms
^C
--- 172.30.0.2 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.726/0.874/1.017/0.134 ms
```
Pinging `bar` on `net2` succeeds:

```shell
$ ping 172.31.0.2
PING 172.31.0.2 (172.31.0.2): 56 data bytes
64 bytes from 172.31.0.2: icmp_seq=0 ttl=63 time=0.910 ms
64 bytes from 172.31.0.2: icmp_seq=1 ttl=63 time=0.882 ms
64 bytes from 172.31.0.2: icmp_seq=2 ttl=63 time=0.642 ms
64 bytes from 172.31.0.2: icmp_seq=3 ttl=63 time=1.028 ms
^C
--- 172.31.0.2 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.642/0.866/1.028/0.140 ms
```
Pinging `foo` on `net2` fails:

```shell
$ ping 172.31.0.3
PING 172.31.0.3 (172.31.0.3): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
^C
--- 172.31.0.3 ping statistics ---
5 packets transmitted, 0 packets received, 100.0% packet loss
```
The fact that `foo` is part of two networks (`net1` and `net2`) but is reachable only on the first (`net1`, not `net2`), while `bar` is part of only the second network (`net2`) and is reachable there, leads me to believe the issue is related to `foo` being part of multiple networks rather than some problem with `net2` specifically.

I will continue to dig and send updates as I make progress.
Interestingly, pinging `foo` on `net2` from within the Linux VM succeeds:

```shell
$ docker run --rm --net host wbitt/network-multitool ping 172.31.0.3
The directory /usr/share/nginx/html is not mounted.
Therefore, over-writing the default index.html file with some useful information:
WBITT Network MultiTool (with NGINX) - docker-desktop - 192.168.65.3 - HTTP: 80 , HTTPS: 443 . (Formerly praqma/network-multitool)
PING 172.31.0.3 (172.31.0.3) 56(84) bytes of data.
64 bytes from 172.31.0.3: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 172.31.0.3: icmp_seq=2 ttl=64 time=0.086 ms
64 bytes from 172.31.0.3: icmp_seq=3 ttl=64 time=0.072 ms
64 bytes from 172.31.0.3: icmp_seq=4 ttl=64 time=0.086 ms
^C
--- 172.31.0.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3070ms
rtt min/avg/max/mdev = 0.054/0.074/0.086/0.013 ms
```
I've determined the root cause of the problem. Packets from the macOS host going to the container through `net2` successfully make their way to the container, but the reply packets are routed back through the wrong interface (`net1` instead of `net2`). This happens because the container has no route for the macOS WireGuard IP `10.33.33.1`, so the reply follows the default route, which happens to point at `net1`. The reply then gets dropped on its way back because it leaves through the wrong interface.
The solution is pretty simple: we just need to add an `iptables` NAT MASQUERADE rule on the Linux host for `10.33.33.1` so that the source IP on incoming packets is translated to the corresponding IP of each Docker network interface on the Linux host, i.e.:

```shell
$ docker run --rm -it --net host --privileged wbitt/network-multitool \
    iptables -t nat -A POSTROUTING -s 10.33.33.1 -j MASQUERADE
```
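What the MASQUERADE rule achieves can be sketched as follows (illustrative Python, not real netfilter code): a packet from the WireGuard peer entering a Docker network has its source rewritten to that network's gateway address, which the container always has a connected route back to.

```python
import ipaddress

# Gateway of each Docker bridge network, from the compose file above.
gateways = {
    ipaddress.ip_network("172.30.0.0/24"): "172.30.0.1",  # net1
    ipaddress.ip_network("172.31.0.0/24"): "172.31.0.1",  # net2
}

def masquerade(src, dst):
    """Rewrite src to the gateway of the network the packet is entering."""
    dst_ip = ipaddress.ip_address(dst)
    for net, gw in gateways.items():
        if dst_ip in net:
            return gw
    return src  # destination not on a known network: leave the packet alone

# A ping from the macOS host (10.33.33.1) to foo's net2 address now
# appears to come from net2's gateway, so the container's reply stays
# on net2 instead of escaping via its default route.
print(masquerade("10.33.33.1", "172.31.0.3"))  # -> 172.31.0.1
```

Because the rewritten source sits on the same subnet as the container's interface, the reply no longer depends on the default route at all, which sidesteps the multi-network problem entirely.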
I'll add in this logic shortly and let you know when that's done.
@ttyz This should be fixed now in v0.1.2. You can upgrade by running:

```shell
$ brew install chipmk/tap/docker-mac-net-connect
```

Can you please confirm that this is fixed on your end?
I can confirm: everything now works as expected. Thank you for your help 👍
I had everything set up and working, but after a restart I get request timeouts when trying to access the containers. The WireGuard interface is up and accessible from the host:

But the containers are not:

The log does not show anything particularly interesting to me:

Can you maybe give me a hint on debugging the issue? I am running Docker Desktop 4.3.2 (72729).