Open rade opened 9 years ago
If someone runs containers with `--net=none`, would they necessarily expect to get external access (unless they do something themselves to achieve that, such as setting the default route to use the weave network)?

To put it another way, what's the point of replicating the functionality of the docker bridge?

So it might help to say what the use cases for this are. If this is related to objections to having two network devices in containers (eth0 and ethwe), then that should not be conflated with `--net=none`. For example, it could be achieved by attaching the docker bridge to the weave network, which is a feature users often seem to want. Whether that would meet all the requirements of users who object to having two interfaces, I don't know, but maybe it's worth considering.
Only having a single network interface is a distinct requirement, not yet captured in an issue.
> what's the point of replicating the functionality of the docker bridge?

a) insulate weave from the highly volatile Docker networking, and b) lay the groundwork for using weave without Docker.
Btw, I agree that the meaning of `--net=none` is not clear. It could mean "no networking at all" (in which case weave networking should be disabled too), or "no Docker networking" (which is the interpretation I am choosing here).
> Btw, I agree that the meaning of `--net=none` is not clear. It could mean "no networking at all" (in which case weave networking should be disabled too),
I don't expect it to be a common scenario, but it seems entirely reasonable to me for someone to want to have a weave network but no docker network (and no automatically-provided substitute for it, i.e. no external access etc.). That seems to me like a natural interpretation of what using the weave proxy / `weave run` with `--net=none` should do. If one doesn't want any networking at all, use `--net=none` and avoid weave net entirely.
Our weave setup does indeed work like you described here. We create a network namespace with `--net=none`, inject the weave interface, and then start the actual container. We also run a default gateway within the weave network that has external connectivity.
The reason we are doing this is that services tend to bind to the wrong interface. It's important to have a single interface that is created before the container is started.
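For concreteness, the flow described above might look roughly like the following. This is only a sketch; the image name, IP addresses, and subnet are invented, and the actual setup may differ in detail:

```shell
# Sketch of the setup described above; names and addresses are made up.

# 1. Start the container with no Docker networking at all:
C=$(docker run -d --net=none my-service-image)

# 2. Inject a weave interface into its network namespace:
weave attach 10.2.1.5/24 $C

# 3. From inside the container's namespace, point the default route at
#    a gateway container on the weave network that has external
#    connectivity (10.2.1.254 here is an assumed address):
docker run --rm --net=container:$C --cap-add=NET_ADMIN alpine \
    ip route add default via 10.2.1.254
```

This way the container only ever sees a single interface (ethwe), which is created before the service process starts.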
> We also run a default gateway within the weave network that has external connectivity.
Can you explain what you've done there? Is this something we could get weave to do? Ostensibly that is what this issue is about.
> It's important to have a single interface that is created before the container is started.
Is that the only motivation for what you are doing?
> Can you explain what you've done there? Is this something we could get weave to do? Ostensibly that is what this issue is about.
We run an alpine container with both the docker and weave interfaces and `iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE`. And we add a default route to each of the other containers with the IP address of the alpine container.
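The gateway described here boils down to something like the following sketch. The interface names and the gateway address are assumptions, not taken from the actual setup:

```shell
# In the "gateway" alpine container, which has both eth0 (docker
# bridge, external connectivity) and ethwe (weave network): NAT all
# outbound traffic leaving via eth0.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# In each application container (weave-only), send outbound traffic
# via the gateway's weave address (10.2.1.254 is an invented address):
ip route add default via 10.2.1.254
```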
> Is that the only motivation for what you are doing?
Yes. Is there a better/simpler way to do this?
> We run an alpine container with both the docker and weave interface and `iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE`

I see. That would give containers access to the outside world but not vice versa, right? i.e. you are not doing anything equivalent to Docker's port-publishing.
> is there a better/simpler way to do this?
Not right now.
I am curious what services you are running that bind to the wrong interface. Most computers have more than one network interface these days, so I am surprised that there is commonly used software out there that assumes otherwise.
> I see. That would give containers access to the outside world but not vice versa, right? i.e. you are not doing anything equivalent to Docker's port-publishing.

Yes. For inbound traffic we use an HAProxy that also has the docker + weave interfaces.
> I am curious what services you are running that bind to the wrong interface.

It's not our services. These are services our customers are running, and we've had support requests where things didn't work as expected because their services bound to the wrong interface. The problem is that it works for them on their local machine using docker-compose, and then they are surprised that it doesn't work in production.
We want weave to work without Docker networking. #1301 is one issue we need to address for that. The other is external access, i.e. we presently rely on Docker networking to provide containers access to the outside world, and for the outside world to access containers. `weave expose` and the documented service export/import features get around that to a degree, but they are rather cumbersome in comparison.
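For reference, the workaround mentioned above looks roughly like this. The CIDR, addresses, and ports are invented, and this is a sketch of the approach rather than the exact documented procedure; it also has to be repeated on every host, which is part of what makes it cumbersome:

```shell
# Give the host itself an interface on the weave network:
weave expose 10.2.0.100/16

# Containers on the weave network can now reach the host. Inbound
# access to a specific container can then be arranged with per-service
# NAT rules on the host, e.g. (addresses and ports are made up):
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
    -j DNAT --to-destination 10.2.1.5:8080
```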