squaremo opened this issue 9 years ago
> There is one clear alternative here: using another bridge device

Another alternative is to wait and see if https://github.com/docker/docker/pull/13441 ends up supporting multiple endpoints supplied to `docker run`; but I am not hopeful that it will. Besides, this indirectly puts the responsibility for the proper operation of the plugin in the hands of the user.
There is one clear alternative here: using another bridge device, on which weaveDNS listens, and on which each container is given an interface.
The pain with this is that it needs to work with weaveDNS; i.e., weaveDNS needs to be told to use this bridge when it is run, and probably this would not play well with deployments in which weaveDNS is used for containers not using the plugin.
Adam points out that there are plans afoot to merge weaveDNS and the router into one container. Presumably this would rely on the weave container being given an interface on whichever bridge is in use -- so there is an element of chicken-and-egg to this.
Downside of having all containers on a host share a "dns" bridge is that it would potentially break isolation between containers on different networks but on the same host.
Alternative could be to have multiple interfaces for dns, one per subnet...
> Downside of having all containers on a host share a "dns" bridge

Yes, although this is presently the case anyway.

> Alternative could be to have multiple interfaces for dns, one per subnet

Hmmmm, this seems like a lot of trouble, although better that it works than not, if it's the only viable means.
@rade suggests that static routes on ethwe (in each container) and a catch-all-weave-addresses route (in the weavedns container) might work.
As an experiment, I did this:
```sh
./weave launch -iprange 10.20.0.0/16
./weave launch-dns 10.254.254.1/24 -debug
./bin/docker-ns weavedns ip route add 10.20.0.0/16 dev ethwe
# now start a container on the weave network, pointing its --dns at weavedns's weave address
C1=$(./weave run -ti --dns=10.254.254.1 ubuntu)
# give it a route to weavedns
./bin/docker-ns $C1 ip route add 10.254.254.1/32 dev ethwe
# try resolving something
docker exec $C1 ping www.google.com
```
and it seems to work. This suggests a plan: at least once weaveDNS is merged with weaver (and possibly before), modify it to use its weave address and static routes on ethwe instead of the docker bridge.
Trickiness: a route on the weavedns ethwe would be needed for each subnet used by a container.
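To make that concrete, this is roughly what would be needed in the weavedns namespace, re-using the `docker-ns` helper from the experiment above; only 10.20.0.0/16 comes from that experiment, the other subnets are made up for illustration:

```sh
# one route per container subnet, added in the weavedns network namespace
./bin/docker-ns weavedns ip route add 10.20.0.0/16 dev ethwe
./bin/docker-ns weavedns ip route add 10.21.0.0/16 dev ethwe
./bin/docker-ns weavedns ip route add 10.22.0.0/16 dev ethwe
```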
@awh has done some very useful investigation into giving the weave bridge an IP and using that as the nameserver IP for containers. The motivation for doing so is that the IP will be much more stable (e.g., across restarts of weavedns).
These are the required steps:
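A minimal sketch of the idea (not @awh's actual steps): `weave` is the bridge the weave script creates, and the address below is picked arbitrarily for illustration.

```sh
# give the weave bridge itself an address that survives weavedns restarts (run as root)
ip addr add 10.254.254.1/24 dev weave
# ...and hand that address to containers as their nameserver; weaveDNS would
# then need to listen on the bridge address rather than on the docker bridge
docker run --dns=10.254.254.1 -ti ubuntu
```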
Harder than it sounds. There are two requirements:

- `nameserver x.x.x.x` in the container's `/etc/resolv.conf`
- the container being able to reach `x.x.x.x`

At the minute, there's no provision for a libnetwork driver to do anything with `/etc/resolv.conf` -- or rather, there is a field in `driverapi`, but no machinery to do anything with it. https://github.com/docker/libnetwork/pull/212 may fix this in part, if it lands and gets things right. (Currently it attributes primacy to an endpoint, regardless of where it is used; rather, it should nominate a primary endpoint for each sandbox. But then, what happens when you remove that endpoint? Who knows.)

The second problem is perhaps more tricky, since it requires the container to have another interface on which to talk to weaveDNS (or for weaveDNS to operate differently. Somehow.)
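For concreteness, checking both requirements against a running container might look like this, re-using the container `$C1` and the 10.254.254.1 nameserver address from the experiment above:

```sh
# requirement 1: the nameserver line is present in the container's resolv.conf
docker exec $C1 cat /etc/resolv.conf      # expect a line: nameserver 10.254.254.1
# requirement 2: the container can actually reach that address
docker exec $C1 ping -c1 10.254.254.1
```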
There is one clear alternative here: using another bridge device, on which weaveDNS listens, and on which each container is given an interface. This would require the recapitulation of the libnetwork bridge driver (allocating IPs and so on).
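To give a sense of what that recapitulation involves, here is a hand-rolled sketch of the per-container plumbing the plugin would have to take on; the bridge name, addresses, and `$CONTAINER_PID` are all invented for the example.

```sh
# one-off: create the extra "dns" bridge and give it an address (run as root)
ip link add dns-br type bridge
ip addr add 10.99.0.1/24 dev dns-br
ip link set dns-br up

# per container: create a veth pair, attach one end to the bridge, move the
# other end into the container's network namespace, and allocate it an address
ip link add vethdns0 type veth peer name vethdns0c
ip link set vethdns0 master dns-br
ip link set vethdns0 up
ip link set vethdns0c netns $CONTAINER_PID
nsenter -t $CONTAINER_PID -n ip addr add 10.99.0.2/24 dev vethdns0c
nsenter -t $CONTAINER_PID -n ip link set vethdns0c up
```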