morsmodre closed this issue 9 years ago
I realised the problem was iptables blocking the br1 traffic:
$$ sudo iptables -nvL
...
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 172.17.0.7 tcp dpt:9160
0 0 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 172.17.0.7 tcp dpt:9042
679K 1760M ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
623K 33M ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
4 336 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
br1 needs forwarding rules like the ones docker0 has. Pipework apparently leaves this to the user, and some sort of warning should probably be added to the documentation. To solve this, in my case, it was enough to add the following rules:
$$ iptables -A FORWARD -i br1 ! -o br1 -j ACCEPT
$$ iptables -A FORWARD -i br1 -o br1 -j ACCEPT
to mimic the ones for docker0.
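These rules are not persisted across reboots. A minimal sketch of adding them idempotently from a boot script might look like this (my own illustration, not something pipework provides; the -C check needs a reasonably recent iptables):

# Append each br1 forwarding rule only if it is not already present;
# iptables -C exits non-zero when the rule does not exist yet.
iptables -C FORWARD -i br1 ! -o br1 -j ACCEPT 2>/dev/null || \
    iptables -A FORWARD -i br1 ! -o br1 -j ACCEPT
iptables -C FORWARD -i br1 -o br1 -j ACCEPT 2>/dev/null || \
    iptables -A FORWARD -i br1 -o br1 -j ACCEPT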
Indeed, the bridged traffic goes through the FORWARD
chain (which is not obvious, since bridging is L2 and iptables is L3...)
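For reference, whether bridged frames are handed to iptables at all is governed by the bridge-nf sysctls (assuming the bridge netfilter code is loaded on the host), so a quick way to confirm this behaviour is:
$$ sysctl net.bridge.bridge-nf-call-iptables
$$ sudo iptables -nvL FORWARD
If the sysctl reports 1, L2-bridged traffic traverses the FORWARD chain, and its packet counters should increase while the containers try to talk to each other.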
I'll leave this issue open for future reference; if someone wants to write a paragraph for the documentation, that will be very welcome!
Closing older issues.
I'm trying to connect two containers with pipework. I run the containers with:
sudo docker run --privileged -d --dns 127.0.0.1 -h my_host -t image:tag /usr/bin/wait_script
The /usr/bin/wait_script is on the container image; it performs a pipework --wait and then starts dnsmasq. The containers are set up so that their IPs are 192.168.100.1 and 192.168.100.2.
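The real wait_script isn't reproduced here, but a minimal sketch of what it does would be roughly this (the dnsmasq invocation is illustrative; the actual script may differ):

#!/bin/sh
# Block until pipework has added the extra interface (eth1 by default),
# then run dnsmasq in the foreground so the container keeps running.
pipework --wait
exec dnsmasq --no-daemon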
After starting the containers I do:
$$ sudo pipework br1 cid $IP/24
where cid is the container ID and $IP is the container's IP.
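Concretely, with placeholder container names, that amounts to something like:
$$ sudo pipework br1 node1 192.168.100.1/24
$$ sudo pipework br1 node2 192.168.100.2/24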
In the containers I see:
$$ ifconfig
eth0      Link encap:Ethernet  HWaddr 8E:33:7E:B9:8B:FC
          inet addr:172.17.0.111  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::8c33:7eff:feb9:8bfc/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:2 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:948 (948.0 b)  TX bytes:648 (648.0 b)

eth1      Link encap:Ethernet  HWaddr EE:38:2B:A8:27:7F
          inet addr:192.168.100.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::ec38:2bff:fea8:277f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:62 errors:0 dropped:0 overruns:0 frame:0
          TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:9066 (8.8 KiB)  TX bytes:1950 (1.9 KiB)

$$ ip route
default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.127
192.168.100.0/24 dev eth1 proto kernel scope link src 192.168.100.2
I don't know if the default route has something to do with it, but it probably doesn't, since it doesn't work either when I change it.
On the host I get:
$$ ifconfig
br1       Link encap:Ethernet  HWaddr 0a:e3:28:c1:ba:9f
          inet6 addr: fe80::a8:7dff:fecc:4d10/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:1860 (1.8 KB)

docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:307930 errors:0 dropped:0 overruns:0 frame:0
          TX packets:335908 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:16359857 (16.3 MB)  TX bytes:889708726 (889.7 MB)

eth0      Link encap:Ethernet  HWaddr 40:16:7e:64:d5:5d
          inet addr:192.168.2.11  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::4216:7eff:fe64:d55d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2058289 errors:0 dropped:0 overruns:0 frame:0
          TX packets:985402 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1812426206 (1.8 GB)  TX bytes:179195358 (179.1 MB)
Notice that br1 does not have an inet addr with an IPv4 address. The containers cannot communicate with each other through the pipework IPs (192.168.100.x), but they can through the default route on 172.17.42.x, which is linked to the docker0 bridge.
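To narrow it down I can run a couple of checks (brctl is from bridge-utils; the ping is from the 192.168.100.2 container):
$$ brctl show
to confirm that both containers' veth peers are actually attached to br1, and
$$ ping -c 3 -I eth1 192.168.100.1
to force the traffic out of eth1 instead of the default route.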
My scripts are based on nicolasff's docker-cassandra scripts: https://github.com/nicolasff/docker-cassandra/blob/master/start-cluster.sh
Can you give me some pointers on what's wrong?