docker / for-win

Bug reports for Docker Desktop for Windows
https://www.docker.com/products/docker#/windows

How to access containers by internal IPs 172.x.x.x #221

Closed. wclr closed this issue 7 years ago.

wclr commented 7 years ago

How do I access containers by their internal IPs 172.x.x.x from the dev machine (with Docker for Windows installed)? By default you cannot connect to containers this way.

I found out that it can be achieved by adding routes manually (you actually need to add a route for each sub-network; I usually do it for 17-25):

route /P add 172.17.0.0 MASK 255.255.0.0 10.0.75.2
route /P add 172.18.0.0 MASK 255.255.0.0 10.0.75.2
route /P add 172.19.0.0 MASK 255.255.0.0 10.0.75.2
...

Is this a valid method? Shouldn't it be possible by default?

rn commented 7 years ago

you should be able to access the containers via localhost. Does that not work?

wclr commented 7 years ago

you should be able to access the containers via localhost.

What do you mean by that? You mean port mapping or what?

I want to reach their IPs; in my case I use dnsdock for DNS discovery of containers, so I can access them by nice DNS names (without needing port mapping).

kallie-b commented 7 years ago

So you should be able to access containers from your container host using the container IP. You can use docker inspect <container ID> to get your container's IP address.
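For example, a quick way to pull just the IP address (a sketch; the Go template assumes you want the address on each attached network):

docker inspect --format "{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}" <container ID>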

Does that answer your question?

wclr commented 7 years ago

@kallie-b OK, what should I do after I get the IP? I want to ping it by IP, but that doesn't work from the dev machine. I'm asking how to do this.

kallie-b commented 7 years ago

Right, okay--yes, I'm happy to help.

So, can you provide the results that you get when you run docker inspect <container ID>? And let me know which IP address you're trying to use to ping the container--I want to confirm you're using the container's internal IP.

Also, I'm assuming your dev machine is the container host--is that correct? You're not running a VM on your dev machine as the container host, or anything like that?

Could you describe the steps you are taking more specifically (including where each step is executed--in the container, on the container host, or on another, external, host)? Wherever possible, also include any error messages.

wclr commented 7 years ago

I'm assuming your dev machine is the container host

My machine is not the container host; it is a Windows 10 dev machine with Docker for Windows installed. It has only the 10.0.75.x interface related to Docker, and no 172.x.x.x interface that would let it communicate with 172.x.x.x addresses directly. The host machine is a Linux VM running on Hyper-V, called MobyLinuxVM.

As I've mentioned, this will solve the issue:

route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2

If I were using Linux (I have never used it with Docker), my dev machine would, I assume, also be the Docker host, so I could access the Docker internal network 172.x.x.x directly without adding any routes to the routing table.

What I want is a comment on this issue from the Docker team, and whether they are going to make the integration between a Windows 10 dev machine and Docker's internal networks deeper.

JMesser81 commented 7 years ago

ping @friism

wclr commented 7 years ago

There seems to be a problem with the Docker network when the following route is added:

route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2

The log is full of events and grows very fast (log.txt reaches 1 GB within a few hours):

[15:48:00.469][VpnKit         ][Debug  ] com.docker.slirp.exe: Socket.Datagram.input udp:10.0.75.1:54882-172.26.234.194:51029: creating UDP NAT rule
[15:48:00.471][VpnKit         ][Debug  ] com.docker.slirp.exe: Socket.Datagram.input udp:10.0.75.1:54883-172.26.234.194:51029: creating UDP NAT rule
[15:48:00.473][VpnKit         ][Debug  ] com.docker.slirp.exe: Socket.Datagram.input udp:10.0.75.1:54884-172.26.234.194:51029: creating UDP NAT rule
[15:48:00.475][VpnKit         ][Debug  ] com.docker.slirp.exe: Socket.Datagram.input udp:10.0.75.1:54885-172.26.234.194:51029: creating UDP NAT rule
[15:48:00.476][VpnKit         ][Debug  ] com.docker.slirp.exe: Socket.Datagram.input udp:10.0.75.1:54886-172.26.234.194:51029: creating UDP NAT rule

Here is a log with this case: https://gist.github.com/whitecolor/4940a8566f2b0211f6864cc11adb69be

This also affects the host: CPU usage climbs to 100% some time later.

Can you comment on this as well? What is causing those events in the log?

dgageot commented 7 years ago

@whitecolor I'm not sure I understand what you are trying to achieve. Is it a Windows container or a Linux container you are trying to connect to?

wclr commented 7 years ago

@dgageot I need to connect to running containers from the Windows dev machine where Docker is installed. This can currently be done by adding the appropriate routes to the routing table via 10.0.75.2 (which I believe is the IP of the Docker Linux host running on Hyper-V).

wclr commented 7 years ago

Have I still failed to explain my request in the OP?

1) I'm running Docker for Windows on a Windows machine. 2) Containers run on this platform have internal IPs like 172.18.x.x. 3) I want to reach (be able to ping) running containers directly from the Windows machine (not via port mapping; I want to reach the container's IP).

By default you cannot just ping 172.18.x.x, but I found a solution: add a route to the routing table:

route /P add 172.18.0.0 MASK 255.255.0.0 10.0.75.2

And now ping 172.18.x.x worked.

But after I installed the latest beta (build 9123), where the networking was changed a lot, this routing-table method doesn't work anymore.

So can you elaborate on this? How can one reach (ping) 172.x containers from the Windows dev machine? Why did the routing-table method stop working, and how can it be fixed?

Hronom commented 7 years ago

@whitecolor Thanks for the workaround! I also ran into this problem on Windows; on Linux I don't have it...

I need to access the containers directly by container IP address, for example 172.18.0.3.

wclr commented 7 years ago

@Hronom I wonder how it works on Linux by default; which gateway routes 172.x addresses to the containers?

Hronom commented 7 years ago

@whitecolor On Linux, if I run ifconfig in a console, I get the following network interfaces:

br-bc76575bc879 Link encap:Ethernet  HWaddr *:*:*:*:*:*  
          inet addr:172.19.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

br-fccc8ee02778 Link encap:Ethernet  HWaddr *:*:*:*:*:*  
          inet addr:172.18.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:343481 errors:0 dropped:0 overruns:0 frame:0
          TX packets:448723 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:93440945 (93.4 MB)  TX bytes:169198433 (169.1 MB)

docker0   Link encap:Ethernet  HWaddr *:*:*:*:*:*  
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:66359 errors:0 dropped:0 overruns:0 frame:0
          TX packets:77517 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:3569440 (3.5 MB)  TX bytes:203222893 (203.2 MB)

So there is a network interface br-fccc8ee02778 with IP 172.18.0.1 and mask 255.255.0.0
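For reference, the same subnet information can also be read from the Docker side (a sketch using docker network inspect; bridge is the default network name, and user-defined networks would use their own names):

docker network inspect bridge --format "{{(index .IPAM.Config 0).Subnet}}"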

wclr commented 7 years ago

So probably a similar interface with the proper address should be added on the Windows host too. But should there be an interface for each 172.x network?

JMesser81 commented 7 years ago

If your Windows containers are connecting to the default nat network on the container host, there should be a host vNIC (named e.g. vEthernet (nat)) with the NAT network's default gateway IP address assigned to it. Could you please verify this by running ipconfig /all?

If that's true, then both the internal NAT network prefix and the external network prefix should be "on-link" from the container host's perspective and routing should happen automatically without creating static routes.

I've also created a PR (https://github.com/Microsoft/Virtualization-Documentation/pull/513) to help aid in container networking diagnostics as well as a clean-up script.
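A sketch of one way to check that vNIC from PowerShell (the interface alias is assumed to match the example name above):

Get-NetIPAddress -InterfaceAlias "vEthernet (nat)" -AddressFamily IPv4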

wclr commented 7 years ago

@dgageot can you please comment on this? I believe it is quite an important and basic networking issue.

wclr commented 7 years ago

@Hronom Can you confirm that the latest beta version doesn't work either (without routes added to the routing table)? I just realized that when I installed the latest version I might not have tested it with a clean routing table. (I just don't want to install and then roll back again.)

Hronom commented 7 years ago

@whitecolor sorry, I don't have a chance to test this under the beta version...

pachkovsky commented 7 years ago

I can confirm that the route add method is not working with the latest beta (1.13.0-rc4-beta34 (9562)). 172.17.0.1 is reachable, but none of the containers are.

I can also confirm that the method is working with 1.12.3 (8488) and 1.12.5 (9503).

wclr commented 7 years ago

@pachkovsky and without the route (out of the box) it doesn't work in the latest beta either, I believe?

@rneugeba @dgageot Not sure why there is no reaction from the team?

pachkovsky commented 7 years ago

@whitecolor without the route it doesn't work in either 1.12.x or 1.13

rn commented 7 years ago

@whitecolor could you please provide exact steps to reproduce what you are trying to achieve, including the command line you use to start dnsdock? Thanks

wclr commented 7 years ago

@rneugeba Well, dnsdock actually has nothing to do with this issue. The problem is with accessing containers by IP from the Windows machine.

rn commented 7 years ago

@whitecolor what's your use case? If you want to monitor a container there are a number of other options...

A bit of background: while access via 172.x.x.x (or similar) may work on a local Linux host, there are a number of additional considerations:

Because of this, this is unlikely to be supported in Docker for Windows (or Docker for Mac).

In general we recommend:

wclr commented 7 years ago

explicitly publish ports for services running in containers. This way they are exposed on localhost on the Windows host.

Use a container to connect to the another container, e.g. docker run -it --rm alpine ping 172.17.0.2 where 172.17.0.2 is the IP from a different container on the same network.

Thanks, the second one is good advice for some (I believe rare) cases. But the much more important case is the developer's convenience in accessing running services by fixed names rather than fixed (or not fixed) ports. Having to deal with a different port number for each of multiple services is a very inflexible and clumsy way to go during development.

So here is the case for why we need it: accessing multiple web services:

Another example is accessing DB servers with management clients (for example MongoChef for MongoDB), and in general accessing any kind of service from special software installed on the developer's machine.

We can't really easily in the general case provide access to the Linux networking from the Windows host because:

So, generally, if there were no access by default, maybe that would even be more correct, to avoid conflicts, BUT there should be some way to make it available for those who need it; adding a manual route (route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2) is an OK solution.

But we need to understand what changed in the MobyLinuxVM network configuration (I believe something was fixed) and why the method of adding a route doesn't work any more.

I hope the Docker for Windows team will be able to help and resolve this.

rn commented 7 years ago

I had a closer look and I can confirm that the route add trick does work on 1.12.5-stable but does not work on the latest beta (1.13.0-rc4 beta34). Digging a bit deeper, I noticed that with 1.13.0-rc4 the ICMP echo request packets arrive on the interface but get dropped by the FORWARD iptables chain. In 1.13.0-rc4 the default policy for the FORWARD chain is set to DROP, while in 1.12.5 the policy is set to ACCEPT.
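(A sketch of how this can be checked from a root shell inside the VM, for example via the nsenter trick mentioned later in this thread:)

iptables -S FORWARD | head -1
# prints "-P FORWARD DROP" on 1.13.x and "-P FORWARD ACCEPT" on 1.12.x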

It appears that this PR changed the policy in response to this issue.

Basically, with 1.12.x and earlier you could access containers from other hosts by default, while with 1.13 you can't anymore. Your route add trick basically allows you to access the Linux VM from "another host", i.e. the host system, and that has been disabled in the upstream Docker engine.

However, you can disable the iptables configuration by specifying "iptables": false in the Advanced Daemon configuration, and I verified that, after adding a route on the host via route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2, I can then ping a container from the host.

Note however, we don't really recommend this approach and would suggest to use the alternatives outlined in a previous comment.

wclr commented 7 years ago

However, you can disable the iptables configuration by specifying "iptables": false in the Advanced Daemon configuration

Could you elaborate on how to achieve this on the latest Docker for Windows? So we can install the latest version and apply the fix.

Note however, we don't really recommend this approach and would suggest to use the alternatives outlined in a previous comment.

Why? What are the arguments behind your recommendation?

rn commented 7 years ago

Could you elaborate on how to achieve this on the latest Docker for Windows?

Whale systray rightclick -> Settings -> Daemon. Then toggle Basic to Advanced and you get an editor window where you can add "iptables": false to the daemon configuration.
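The daemon configuration in that editor would then contain something like this (a minimal sketch; keep any other settings you already have alongside it):

{
  "iptables": false
}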

Why? What are the arguments behind your recommendation?

Accessing containers by IP may make your development flow easier, but for production it will certainly be better to expose ports and services properly.
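For example, the recommended pattern is a published port (a sketch with an assumed image and host port):

docker run -d -p 8080:80 nginx

after which the service is reachable at http://localhost:8080 on the Windows host.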

wclr commented 7 years ago

Thanks. Will try with latest beta.

Accessing containers by IP may make your development flow easier, but for production it will certainly be better to expose ports and services properly.

Yeah, it is obviously about enhancing the development workflow with docker & docker-compose.

rn commented 7 years ago

@whitecolor could you confirm if the workaround suggested is working for you? Thanks

pachkovsky commented 7 years ago

Whale systray rightclick -> Settings -> Daemon. Then toggle Basic to Advanced and you get an editor window where you can add "iptables": false to the daemon configuration.

@rneugeba works for me in 1.13.0-rc4

wclr commented 7 years ago

@rneugeba After adding "iptables": false

https://docs.docker.com/engine/reference/commandline/dockerd/#linux-configuration-file There are multiple warnings about disabling this option:

--iptables=false prevents the Docker daemon from adding iptables rules. If multiple daemons manage iptables rules, they may overwrite rules set by another daemon. Be aware that disabling this option requires you to manually add iptables rules to expose container ports. If you prevent Docker from adding iptables rules, Docker will also not add IP masquerading rules, even if you set --ip-masq to true. Without IP masquerading rules, Docker containers will not be able to connect to external hosts or the internet when using network other than default bridge.
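(For illustration, the kind of masquerading rule Docker normally adds for the default bridge, and which you would have to recreate yourself inside the VM with iptables disabled, looks roughly like this; the subnet and bridge name are the defaults and may differ:)

iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE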

Considering this, we need more instructions on how to make Docker function normally, or another workaround than "iptables": false. What are your suggestions?

@rneugeba is there a way to get inside MobyLinuxVM? I can't find a way to do this; I want to explore the routing tables inside.

rn commented 7 years ago

Oops, you are right. In that case, I'm afraid, there is no easy way to achieve what you want.

I will discuss next week with some other folks but, unfortunately, this might be closed as won't fix

wclr commented 7 years ago

@rneugeba

I will discuss next week with some other folks but, unfortunately, this might be closed as won't fix

Surely this should not be; at the very least there should be some, maybe hacky, way to achieve this (I don't know: maybe change this ip-forward policy manually in MobyLinuxVM).

I think this is an important feature, and if it isn't available it will do dev folks a very bad disservice. Not many people use Docker in development yet, because it is really hard to get a smooth dev workflow (not impossible, but still quite hard); surely Docker wants to simplify things, not make them harder.

I'll repeat my question: is there a way to get inside MobyLinuxVM? I can't find a way to do this; I want to explore the routing tables inside.

rn commented 7 years ago

You can enter the root namespace via nsenter as with Linux containers. Something like

docker run --rm -ti --privileged --network=none --pid=host justincormack/nsenter1 /bin/sh

should do the trick.
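(Once inside that shell you are in the VM's root namespace, so the routing table and firewall policy can be inspected with standard tools; a sketch:)

ip route
iptables -S FORWARD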

rn commented 7 years ago

I discussed this with my colleagues and unfortunately there doesn't seem to be an easy/natural way to expose the container network on the host with Docker for Windows/Mac. Closing this as won't-fix.

wclr commented 7 years ago

@rneugeba that is really sad and disappointing to hear. Can you advise what can be changed manually in the MobyLinuxVM to make this available again (as it was when it worked)? Maybe something related to iptables policies?

Don't you think that this is actually an important feature for developers? Maybe have some special setting that those who need it could enable in dev mode?

rn commented 7 years ago

As mentioned above, this is a change which was introduced with Docker engine 1.13 with this PR due to this issue. This was a genuine issue and it's good that it got fixed.

There is no easy workaround I can see. You could disable iptables and then manage the rules yourself, but that seems very error prone and complex...

wclr commented 7 years ago

For those who need a solution:

or with single command:

Seems to work.
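A sketch of what such a workaround can look like, based on the nsenter command shared above and the route discussed earlier (the FORWARD policy reset is an assumption drawn from @rneugeba's description of the 1.13 change):

docker run --rm -ti --privileged --network=none --pid=host justincormack/nsenter1 /bin/sh
# inside the VM shell, relax the forwarding policy again:
iptables -P FORWARD ACCEPT
# then, back on the Windows host, add the route:
route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2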

wclr commented 7 years ago

@rneugeba maybe create an issue in docker to add a parameter to the docker daemon config, something like ip-forward-policy?

Also, there is an issue with implementing the workaround: I wanted to add some on-boot script that would change the policy after the Docker services start in the VM, but each time MobyLinuxVM restarts, changes made to the VM's file system disappear. Any ideas how this could be done?

TewWe commented 7 years ago

I would also bump this for a workaround. As @whitecolor suggests, it shouldn't be the default, but some option that would allow turning on something like a "developer mode" with the ip-forward-policy setting; for non-prod purposes it could still help a lot.

How do we use Docker for development, and why would we need such a feature? The environment we are developing against is a very data- and inter-dependent service network. Because of this we built different Docker container pools which are generated automatically; each represents a subset of the service network and is autonomous enough for a developer to test their flow. So we generate these containers continuously, and developers can use several of them to simulate part of the real environment for data-dependent testing. As there can be dozens of services running in these pools and multiple developer machines should use them, exposing ports would be error prone and would cause collisions if multiple pools are used on multiple machines. Not to mention the administrative part, where one has to know which port belongs to which service in a specific pool.

So we really benefit from the current possibility of directly accessing the containers, and would love to see this in the future as well!

Thanks.

wclr commented 7 years ago

I've created a proposal in the docker repo about allowing the IP forward policy to be overridden: https://github.com/docker/docker/issues/30093

@TewWe

It is still possible to use the workaround I posted in my previous comment with the latest version.

rn commented 7 years ago

Note that this only works because we have two network interfaces in Windows and one is accessible from the host (the 10.0.75.1 one). This is only used for SMB file sharing, and if we find a different path for it, it may go away. Providing a more generic way of accessing the internal Docker network that works across Windows and Mac is non-trivial. We have discussed some options internally, but none of them are really palatable.

wclr commented 7 years ago

Note that this only works because we have two network interfaces in Windows and one is accessible from the host (the 10.0.75.1 one). This is only used for SMB file sharing, and if we find a different path for it, it may go away.

Currently the route is added via 10.0.75.2; what is this IP for?

I believe it should still be possible to have access from a Windows/Mac PC to the Docker internal network. It is all just IP, which provides inter-network routing. A native Linux installation provides access to the Docker network, so why shouldn't Windows provide it? Besides, Windows and Mac installations are mostly used for development; I think this is a really important feature and it should eventually be resolved by the Docker team.

ghost commented 7 years ago

I have an SMB share on the host to share local folders with Docker, and now I want to run a couple of containers with SMB shares as well. How exactly would you redirect those ports to the host? It's impossible! The only way is to access them directly, and you removed that option!

wclr commented 7 years ago

@g8tguy there is a workaround currently. I just hope the Docker team will be smart/gracious enough not to remove this ability altogether, because that would really make things bad.

mverrilli commented 7 years ago

One comment regarding the workaround... the netmask should probably be tighter, since 172 contains both public and private addresses. Shouldn't it be 172.16.0.0 and 255.240.0.0?
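Applied to the route command above, that would be (a sketch with the same gateway):

route /P add 172.16.0.0 MASK 255.240.0.0 10.0.75.2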

wclr commented 7 years ago

@mverrilli it may be; I just had some issues with the 255.0.0.0 version of the mask in older Docker versions (the docker log was flooded with messages). If it works for you, then OK.

mverrilli commented 7 years ago

@whitecolor Well, if you include all of 172, then you are going to have problems accessing some sites. For example, here are some Google IP blocks. If you happen to get routed through these networks without using the netmask I provided, you won't be able to route to them.

172.253.0.0/16 172.217.30.0/24 172.217.28.0/24 172.217.24.0/24 172.217.16.0/24 172.217.0.0/24 172.217.0.0/16 172.102.9.0/24 172.102.8.0/24 172.102.8.0/21 172.102.14.0/23 172.102.12.0/23 172.102.11.0/24 172.102.10.0/24