It runs deeper than just not being shown in ifconfig. Container network firewall configuration gets the interface names directly from the network endpoint configurations. My guess is that the two network endpoints are correctly configured, but they both have the same interface name by some oversight.
@willsherwood Following your comments above, if two bridge networks happen to have the same interface name (e.g., both are eth0), the firewall rules set for eth0 will work for both networks, right? That is, pinging a container on network1 will follow the same rules as pinging another container on network2. If this is the case, the two networks should still function normally. But customers will not be able to set different firewall rules for different networks, since all they see is eth0. Am I right?
@corrieb @hickeng Any thoughts? I thought this was just a minor issue, but now it seems it could hurt our customers if my reasoning above is correct. Note that this issue is currently not in release 1.2.
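To illustrate the concern: if both networks end up on the same eth0 link, an interface-scoped firewall rule cannot distinguish them, and per-network rules would have to match on the subnet instead. The following iptables commands are illustrative only (the VIC chain name and the subnets are taken from the iptables output further down); they are not what VIC actually generates:
# assumption: both bridge networks share eth0
iptables -A VIC -i eth0 -j ACCEPT                    # matches traffic from BOTH bridge networks
iptables -A VIC -i eth0 -s 172.17.0.0/16 -j ACCEPT   # a per-network rule would need to match the subnet
iptables -A VIC -i eth0 -s 172.18.0.0/16 -j ACCEPT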
Hmm, I created a container connected to two bridge networks on VIC (after applying PR #5991):
/ # chroot /.tether /iptables -L -v
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
1 36 VIC all -- any any anywhere anywhere
0 0 ACCEPT all -- lo any anywhere anywhere
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- lo lo anywhere anywhere
Chain OUTPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 VIC all -- any any anywhere anywhere
0 0 ACCEPT all -- any lo anywhere anywhere
Chain VIC (2 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- eth0 any anywhere anywhere state RELATED,ESTABLISHED
0 0 ACCEPT all -- any eth0 anywhere anywhere
0 0 ACCEPT all -- eth0 any 172.18.0.0/16 anywhere
0 0 ACCEPT all -- eth0 any anywhere anywhere state RELATED,ESTABLISHED
0 0 ACCEPT all -- any eth0 anywhere anywhere
0 0 ACCEPT all -- eth0 any 172.17.0.0/16 anywhere
0 0 RETURN all -- any any anywhere anywhere
Both bridge networks show up in the iptables rules.
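For context, a setup like the one above can be recreated with the standard docker CLI against the VCH. These commands are my reconstruction rather than the exact ones used; the network names b1 and b2 match the endpoints in the tether log below, and the container name is made up:
docker network create b1
docker network create b2
docker create -it --name two-nets --net b1 busybox
docker network connect b2 two-nets    # attach the same container to the second bridge network
docker start two-nets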
From the tether.debug of a container connected to two bridge networks, I find the following:
Aug 10 2017 20:18:51.438Z DEBUG [BEGIN] [github.com/vmware/vic/lib/tether.(*BaseOperations).Apply:529] applying endpoint configuration for b1
Aug 10 2017 20:18:51.448Z DEBUG got link name: "eth0"
Aug 10 2017 20:18:51.458Z DEBUG &{Common:{ExecutionEnvironment: ID:192 Name: Notes:} Static:false IP:172.17.0.2/16 Assigned:{IP:<nil> Mask:<nil>} Network: DHCP:<nil> Ports:[] configured:false}
Aug 10 2017 20:18:51.470Z INFO setting ip address 172.17.0.2/16 for link eth0
Aug 10 2017 20:18:51.476Z DEBUG added address 172.17.0.2/16 to link eth0
Aug 10 2017 20:18:51.666Z DEBUG [BEGIN] [github.com/vmware/vic/lib/tether.(*BaseOperations).Apply:529] applying endpoint configuration for b2
Aug 10 2017 20:18:51.675Z DEBUG got link name: "eth0"
Aug 10 2017 20:18:51.680Z DEBUG &{Common:{ExecutionEnvironment: ID:192 Name: Notes:} Static:false IP:172.18.0.2/16 Assigned:{IP:<nil> Mask:<nil>} Network: DHCP:<nil> Ports:[] configured:false}
Aug 10 2017 20:18:51.692Z INFO setting ip address 172.18.0.2/16 for link eth0
Aug 10 2017 20:18:51.698Z DEBUG added address 172.18.0.2/16 to link eth0
So we can confirm that both networks got the same link name, eth0.
After further investigation, I find that the link name is generated based on the network endpoint's ID. In this case, both endpoints have the same ID, as seen in the log above: &{Common:{ExecutionEnvironment: ID:192 Name: Notes:}
Maybe we are doing this intentionally to reuse the same NIC for multiple bridge networks.
We have a test case at https://github.com/vmware/vic/blob/master/tests/test-cases/Group1-Docker-Commands/1-17-Docker-Network-Connect.robot in which a container is connected to two bridge networks and we expect ip -4 addr show eth0 to output something like
inet 172.16.0.2/16 scope global eth0
inet 172.19.0.2/16 scope global eth0
An attempt at a release note:
ifconfig only shows eth0 when a container connects to multiple bridge networks. #5990
When a container is connected to multiple bridge networks, ifconfig on that container only returns the details of eth0. The networks function correctly, but the output of ifconfig is incomplete.
@chengwang86 is this OK? Thanks!
@stuclem That looks great :) Thanks!
Thanks @chengwang86
@stuclem @chengwang86 ifconfig or ip link show will only show one interface because there is only one interface. While it is possible to have multiple NICs on the same Ethernet domain, it's involved and generally falls under Link Aggregation.
We explicitly do not use multiple NICs - the network ID Cheng saw is actually for the port group, which is the Ethernet broadcast domain that all bridge networks share. We use IP to segregate that port group into multiple docker bridge networks, as seen in the network ls command output, but at an Ethernet level there is only one network.
If you use ip addr show you will see multiple IPs aliased to the eth0 interface. If you add a container-network you will see a second NIC, because the container-network is a different port group (and therefore a different Ethernet broadcast domain).
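A quick way to see that distinction using the values from this thread (the addresses come from the tether log above; this assumes shell access inside a container attached to both bridge networks):
ip link show           # Layer 2: a single eth0 interface
ip -4 addr show eth0   # Layer 3: both addresses aliased to eth0, e.g. 172.17.0.2/16 and 172.18.0.2/16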
In summary (@mlh78750 correct my terminology if wrong):
docker network ls / ip addr show - Layer 3
container networks and bridge portgroup / eth0 / ip link show - Layer 2
vSwitch - Layer 1
@hickeng are you saying that this is by design and therefore not a bug? If so, should I remove this from the release notes and document it as the normal behaviour for ifconfig in the core docs? Please confirm.
@stuclem correct - this is not a bug and is expected behaviour. I don't think we should document ifconfig output in isolation, but we could as part of documentation describing how docker network networks work and what the bridge interface is used for.
@emeirell has produced some good blog posts covering some of this - you may be able to work with him to distill appropriate wording if you need another source.
Closing as not-a-bug/working as expected.
@hickeng is right. No matter how many bridge networks the container is connected to, it will always uplink to the same interface (eth0). I wrote a post about this behavior a while ago: http://www.justait.net/2017/06/vic-userdefined.html
Thanks @emeirell and @hickeng. Moving this to vic-product for documentation.
Issue moved to vmware/vic-product #921 via ZenHub
For bug reports, please include the information below:
VIC version:
Latest code on 08/10/2017.
Deployment details:
What was the vic-machine create command used to deploy the VCH?
Steps to reproduce:
Then within the container test1, run ifconfig.
Actual behavior:
The output of ifconfig is:
Here eth0 is bridge1.
Expected behavior:
Additional details as necessary:
If I create another container on eth1 (bridge2), e.g., docker create -it --name test2 --net bridge2 busybox, this new container is still able to reach the first container via ping test1. So it seems like the networks function normally, but the output of ifconfig is incomplete.
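Since the full reproduction steps and the ifconfig output did not survive above, here is a hedged reconstruction of a repro sequence based only on the names mentioned in this report (bridge1, bridge2, test1, test2, busybox); the exact original commands may have differed:
docker network create bridge1
docker network create bridge2
docker create -it --name test1 --net bridge1 busybox
docker network connect bridge2 test1    # test1 is now on both bridge networks
docker start test1
# within test1, ifconfig reportedly shows only eth0 (bridge1)
docker create -it --name test2 --net bridge2 busybox
docker start test2
# test2 can still reach test1, e.g. with: ping test1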