Open KartoffelToby opened 7 years ago
Thanks for the feature request. We know this is a useful capability and we have discussed use cases at length.
I'm curious: what prevents you from running multiple database containers and mapping each to a different port and then using a complete sockaddr (address + port rather than address alone) to refer to each?
It's for a dev environment: developers access the databases via domain names, and it should be totally dynamic, so I can't do a static configuration by mapping ports.
I resolve the dev domains for the databases (mysql.server1.local to 172.17.0.X). On Linux it works like a charm.
But on Docker for Mac I have the issue that the 172.17.0.X IPs aren't accessible from the host because of the VM.
This pretty much relates to #171. Our specific use-case is the same.
We have set up an /etc/resolver file to use Docker's or Weave's DNS to resolve to local container IPs, and with that we can expect it to 'just work' using default ports.
Previously, all we needed was to route all traffic into our VM on a specific IP range, but now that's not possible, so we either have to bind specific ports for specific services, or we need a clever load balancer, or we need to do manual service discovery. Either way, the previous method, where it just worked and we didn't have to worry about any of the above, seemed like the better solution to us, at least.
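For reference, a macOS per-domain resolver file of the kind described above is just a file under /etc/resolver named after the domain. A minimal sketch (the file name and nameserver address are illustrative assumptions, not values from this thread):

```
# /etc/resolver/server1.local  (hypothetical)
# macOS sends DNS queries for *.server1.local to this server,
# e.g. a DNS service reachable on the Docker bridge.
nameserver 172.17.0.1
```

See resolver(5) on macOS for the supported keys.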
+1, interested in the fix
+1, interested in the fix
Any plans to resolve this issue in near future?
Hi, we are considering how this could be implemented, potentially via a VPN from the Mac to the Linux VM, but it is not being worked on at present. It is probably possible to implement a version by running a container with --net=host that runs a VPN endpoint, if someone wants to give it a go.
@justincormack thanks for the fast response. For me, and I suppose for most people here, Docker for Mac is only for development purposes, so I do not think anyone will complain as long as it ends up as easy to use as Docker on Linux.
If you're considering a VPN, I assume there's no way to create network connectivity between the Mac and xhyve, huh? I wonder why. I think a VPN is a bit of an overkill... it really should be a simple developer experience, pinging the container IPs like you would on Linux...
Was the approach mentioned in this thread on the forums considered?
I don't know enough to have an opinion myself, but from reading the first 3 messages, it sounded like there was a fairly simple way of getting a workable solution. Thanks.
@4shome
Maybe I'm wrong, but the only problem is that the xhyve VM inside Docker for Mac has no network adapter, or routable IP, like boot2docker had (e.g. 192.168.100.99).
With boot2docker and a route command it's possible to route all the container IPs to the VM network.
We need this for xhyve.
What's going on with this tuntap plugin?
Here is an abstract scenario that isn't currently working with Docker for Mac.
Any update on it ?
🆙 2017
If this were possible, the use case of connecting Docker for Mac containers to overlay networks running across Linux hosts would become possible. This would be so cool for testing (e.g. imagine connecting a locally-developed container with tests to a cloud-based application). Pretty, please? :)
As mentioned in the comment above, the Support tap interface for direct container access (incl. multi-host) thread on the Docker forums appears to offer a solution from @michaelhenkel, which I hope he will not mind me copying in here in the hope it moves this issue forward a little:
starting the daemon manually with:
sudo /Applications/Docker.app/Contents/MacOS/com.docker.hyperkit -A -m 4G -c 4 -u -s 0:0,hostbridge -s 31,lpc -s 2:0,virtio-tap,tap1 -s 3,virtio-blk,file:///Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2,format=qcow -s 4,virtio-9p,path=/Users/birdman/Library/Containers/com.docker.docker/Data/s40,tag=db -s 5,virtio-rnd -s 6,virtio-9p,path=/Users/birdman/Library/Containers/com.docker.docker/Data/s51,tag=port -s 7,virtio-sock,guest_cid=3,path=/Users/birdman/Library/Containers/com.docker.docker/Data,guest_forwards=2376 -l com1,autopty=/Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty,log=/Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/console-ring -f kexec,/Applications/Docker.app/Contents/Resources/moby/vmlinuz64,/Applications/Docker.app/Contents/Resources/moby/initrd.img,earlyprintk=serial console=ttyS0 com.docker.driverDir="/Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux", com.docker.database="com.docker.driver.amd64-linux" ntp=gateway -F /Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/hypervisor.pid
allows me to assign a routable IP address to moby:
moby:~# ip addr add 10.1.0.2/24 dev eth0
moby:~# ping 10.1.0.1
PING 10.1.0.1 (10.1.0.1): 56 data bytes
64 bytes from 10.1.0.1: seq=0 ttl=64 time=0.349 ms
and to prove the point:
docker -H tcp://10.1.0.2:2375 network create -d macvlan --subnet 10.1.0.0/24 --ip-range 10.1.0.128/25 --gateway 10.1.0.1 -o parent=eth0 net2
docker -H tcp://10.1.0.2:2375 run -itd --name alp2 --net net2 alpine /bin/sh
93b85398e5356541d8f843c1ce19171cc3c56d217e889c7132dc0c539932c612
docker -H tcp://10.1.0.2:2375 exec -it alp2 ip addr sh
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: sit0@NONE: mtu 1480 qdisc noop state DOWN qlen 1
    link/sit 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: mtu 1452 qdisc noop state DOWN qlen 1
    link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
4: ip6gre0@NONE: mtu 1448 qdisc noop state DOWN qlen 1
    link/[823] 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
14: eth0@ip6gre0: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:42:0a:01:00:80 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.128/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe01:80/64 scope link
       valid_lft forever preferred_lft forever
ping 10.1.0.128
PING 10.1.0.128 (10.1.0.128): 56 data bytes
64 bytes from 10.1.0.128: icmp_seq=0 ttl=64 time=0.408 ms
64 bytes from 10.1.0.128: icmp_seq=1 ttl=64 time=0.204 ms
OS X can ping the container...
I've tried to run the Docker daemon on Sierra and got:
Artems-MacBook-Pro:abcp artemkaint$ sudo /Applications/Docker.app/Contents/MacOS/com.docker.hyperkit -A -m 4G -c 4 -u -s 0:0,hostbridge -s 31,lpc -s 2:0,virtio-tap,tap1 -s 3,virtio-blk,file:///Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2,format=qcow -s 4,virtio-9p,path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s40,tag=db -s 5,virtio-rnd -s 6,virtio-9p,path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s51,tag=port -s 7,virtio-sock,guest_cid=3,path=/Users/artemkaint/Library/Containers/com.docker.docker/Data,guest_forwards=2376 -l com1,autopty=/Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty,log=/Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/console-ring -f kexec,/Applications/Docker.app/Contents/Resources/moby/vmlinuz64,/Applications/Docker.app/Contents/Resources/moby/initrd.img,earlyprintk=serial console=ttyS0 com.docker.driverDir="/Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux", com.docker.database="com.docker.driver.amd64-linux" ntp=gateway -F /Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/hypervisor.pid
Password:
open of tap device /dev/tap1 failed
mirage_block_open: block_config = file:///Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 and qcow_config = None
com.docker.hyperkit: [INFO] image has 0 free sectors and 244203 used sectors
mirage_block_open: block_config = file:///Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 and qcow_config = None returning 0
mirage_block_stat
virtio-9p: initialising path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s40,tag=db
virtio-9p: initialising path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s51,tag=port
vsock init 7:0 = /Users/artemkaint/Library/Containers/com.docker.docker/Data, guest_cid = 00000003
linkname /Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
COM1 connected to /dev/ttys004
COM1 linked to /Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
rdmsr to register 0x64e on vcpu 0
rdmsr to register 0x34 on vcpu 0
and then the script hung for a long time
It appears a standard xhyve install is able to allow access to the xhyve VM from the outside, as indicated by the blog post at http://mifo.sk/post/xhyve-for-development
I think this would be the first step to properly supporting --net=host in Docker for Mac
Can someone from the Docker team investigate this?
@justincormack it's already been 8 months since this issue was created and there is still no response from Docker staff. I don't know what is blocking this, since hyperkit can create a bridge interface to communicate between host and VM: https://github.com/docker/hyperkit/issues/45. What's more, that was posted a month before this issue was created.
Most of us just want to know: what is blocking this issue from being fixed?
This has been blocking us making progress on moving local development to Docker as a team for a few months now (we're about 50/50 Linux/OSX, and the half on Linux were getting bored waiting!) so I borrowed a colleague's Mac and bashed (pun intended) this together:
https://github.com/mal/docker-for-mac-host-bridge
It's not perfect, it's limited to a single bridge network for now (but with scope for expansion), and it doesn't handle Docker daemon restarts just yet (see the warning in the Readme), but it's at least usable. Hopefully some of you will find it useful too, until this issue is resolved. 😄
The issue with 17.04 reported below has been fixed as of v1.1.0 🙂
Please report any other problems on the repo itself to avoid hijacking this thread 👍
@mal thanks, it works perfectly on Stable! But it suddenly crashed after this morning's update on 17.04 (Beta). I hope the author will fix the script soon :)
There's an experimental new networking mode in the Docker for Mac master branch which might help. The master builds of Docker for Mac are available from https://download-stage.docker.com/mac/master/Docker.dmg. These are intended only for testing and not for production.
The latest version is
Version 17.05.0-ce-mac9 (17691)
Channel: master
672b42570d
I installed it and then enabled the new mode by:
$ cd ~/Library/Containers/com.docker.docker/Data/database/
$ git reset --hard
HEAD is now at e87cf4c Writing mobyconfig configuration to master
$ echo -n both > com.docker.driver.amd64-linux/network
$ git add com.docker.driver.amd64-linux/network
$ git commit -s -m 'Use the experimental networking mode'
[master c4653e5] Use the experimental networking mode
1 file changed, 1 insertion(+), 1 deletion(-)
-- at this point the VM will restart. When the VM is back online it will have 2 interfaces:
eth0: this is the same as before
eth1: this is bridged to the host via the vmnet.framework
For example, if you look inside the VM with docker run --rm --net=host --pid=host --privileged -it justincormack/nsenter1 /bin/sh, you should see:
eth1 Link encap:Ethernet HWaddr 86:36:01:A5:5F:7C
inet addr:192.168.64.154 Bcast:192.168.64.255 Mask:255.255.255.0
inet6 addr: fe80::ad71:ade1:8072:6a6d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1702 (1.6 KiB) TX bytes:1546 (1.5 KiB)
and on the host you should see:
bridge100: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=3<RXCSUM,TXCSUM>
ether ae:bc:32:bc:73:64
inet 192.168.64.1 netmask 0xffffff00 broadcast 192.168.64.255
Configuration:
id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
ipfilter disabled flags 0x2
member: en4 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 15 priority 0 path cost 0
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
Let me know if this is useful at all.
@djs55 What does this new networking mode do? I didn't get it, sorry :(
How does it solve this: "Access the containers from the host like on Linux"?
@inancgumus the IP address on the new interface eth1 should be reachable from the host. The IP is dynamic, but should begin with 192.168.64.x.
@MagnusS So, can I use the eth1 IP address to reach the containers through their exposed ports even when they're configured with network_mode: host?
@inancgumus Note that the new (experimental) mode is a low-level building block intended to allow experimentation -- it's not a completed feature or ready for production use. It's inspired by the workaround here: https://github.com/mal/docker-for-mac-host-bridge. Rather than use the tuntap kernel module and wrap the com.docker.hyperkit binary (which is fragile), this uses the Apple vmnet.framework to create the interfaces and the bridge. To use it end-to-end you'll probably either need to edit your routing tables or use network_mode: host, bind ports inside containers and then access them from 192.168.64.x (depending on the exact x allocated to the VM).
~/Library/Containers/com.docker.docker/Data/database$ git show HEAD:com.docker.driver.amd64-linux/network
both
~/Library/Containers/com.docker.docker/Data/database$ docker run --rm --net=host --pid=host --privileged -i justincormack/nsenter1 /sbin/ip a | grep ': eth'
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
~/Library/Containers/com.docker.docker/Data/database$ docker version
Client:
Version: 17.06.0-ce-rc1
API version: 1.30
Go version: go1.8.1
Git commit: 7f8486a
Built: Wed May 31 02:56:01 2017
OS/Arch: darwin/amd64
Server:
Version: 17.06.0-ce-rc1
API version: 1.30 (minimum version 1.12)
Go version: go1.8.1
Git commit: 7f8486a
Built: Wed May 31 03:00:14 2017
OS/Arch: linux/amd64
Experimental: true
Is this option still available?
@dmage the mode is available in 17.06.0-ce-rc1 but the configuration key was changed to network-2 -- sorry for the confusion.
To turn it on:
cd ~/Library/Containers/com.docker.docker/Data/database/
git reset --hard
echo -n both > com.docker.driver.amd64-linux/network-2
git add com.docker.driver.amd64-linux/network-2
git commit -s -m 'Use the experimental networking mode'
The VM no longer restarts automatically -- it's necessary to manually restart the app.
Let me know if you find this experimental mode useful.
It's quite useful, but I'd like to see a better way to determine the IP address than
docker run --rm --net=host busybox sh -c 'ip -o -4 addr show eth1 | awk -F"[ /]+" "{print\$4}"'
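For what it's worth, the field-splitting trick in that one-liner can be checked in isolation, outside Docker, against a captured line of `ip -o -4 addr show` output (the sample line below is made up, not from a real VM):

```shell
# One output line of `ip -o -4 addr show eth1` (illustrative values).
sample='4: eth1    inet 192.168.64.3/24 brd 192.168.64.255 scope global eth1'

# -F"[ /]+" splits on runs of spaces and slashes, so the address
# (without the /24 prefix length) lands in field 4.
echo "$sample" | awk -F"[ /]+" '{print $4}'
# prints: 192.168.64.3
```

The escaped `\$4` in the original is only needed because the awk program there is nested inside a double-quoted `sh -c` string.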
Hello guys,
I am currently running an Ubuntu VM through VMware with Bridge networking, using Samba shares and SSH to work with Docker on my Mac as intended.
I have multiple Docker containers running for one project at a time -- MySQL running on one container, NGINX on another and so on.
Does this new beta version of Docker for Mac allow multiple containers to be connected directly to OSX and to each other, like on Linux?
Hi all,
I partly got this to work under Version 17.06.0-rc2-ce-mac14 (18280). I can access one container via 192.168.64.3, but all containers seem to report the same IP.
E.g. I have portainer running on port 9000. I can access it via localhost:9000 and via 192.168.64.3:9000.
Running docker run --rm --net=host busybox sh -c 'ip -o -4 addr show eth1 | awk -F"[ /]+" "{print\$4}"'
I also get 192.168.64.3.
Secondly: is there a way to route traffic from the bridge network 172.17.0.x to the 192.168.64.x network?
+1 for an easy and proper solution for this. I have the same problem: host access from Docker works fine on Linux via 172.17.0.1, and is a nightmare on Mac.
I really cannot believe this has not been solved yet.
I have Docker for Mac 17.06.0-ce-mac19. I run my container with
docker run --env MYSQL_ALLOW_EMPTY_PASSWORD=yes --name local-mysql --rm mysql:5.7
docker network inspect bridge
returns
[
{
"Name": "bridge",
"Id": "7b9efe7799e0aa824be77022763e0996c35e132bdcd39eb20b7c3a5b8066a4d1",
"Created": "2017-07-19T11:18:36.348056442Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"d2721db10fc66b329fdba6ad1e60a82dc6615adc8d39a83e1b59e5359f4cf602": {
"Name": "local-mysql",
"EndpointID": "2823195643fbad0b44c925f3d55eb3b039adf4f9da498993fcb7ca185818d947",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
But then ping 172.17.0.2 times out.
For those who haven't yet got a working solution for this, I have created a shim workaround in this project. It behaves exactly the same as the hvint0 interface on Windows:
https://github.com/AlmirKadric-Published/docker-tuntap-osx
More details can be found here: https://forums.docker.com/t/support-tap-interface-for-direct-container-access-incl-multi-host/17835/21
Honestly, the fact this issue hasn't been fixed is ridiculous, so I plan to make some waves with this package. Request to update the docs here: https://github.com/docker/docker.github.io/issues/3922
Please upvote and comment your support so that this issue gets the attention it deserves
@AlmirKadric I believe the problem here is that they want to avoid using sudo.
@aPoCoMiLogin my suggestion is to leave that decision to the user. With my implementation, all the Docker for Mac team would have to do (for now, until they have a more comprehensive customized solution) is allow the user to configure the hyperkit arguments used to start the host VM instance (or at the very least the network interfaces). By doing this the user can opt in to a solution like TunTap which may need sudo access.
I understand that they would like to implement something perfect, but in the meantime we have no real solution or workaround (apart from hacks such as my project above). IMHO users should not get stuck whilst they figure things out (it's already been more than a year).
@AlmirKadric I have exactly the same thoughts on that. First of all, this is not a calendar app that shouldn't use root privileges. I know that you should avoid root, but this can be dealt with later without blocking anyone.
UPDATE: Ignore this; there is a solution by @ericdwhite further down in the comments.
Using the network-2 option from above, I believe it should be possible to set up the host VM to forward packets to a Docker-created network from OSX.
e.g. on OSX
docker network create --subnet 10.11.12.0/24 ww_dev_nw --gateway 10.11.12.1
sudo route add 10.11.12.0/24 192.168.64.3
Where 192.168.64.3 can be found using: docker run --rm --net=host busybox sh -c 'ip -o -4 addr show eth2 | awk -F"[ /]+" "{print\$4}"'
HOWEVER this doesn't work. The packets get into the host VM but not through to the container network ww_dev_nw. I tried different iptables commands in the host VM, accessed via: docker run --rm --net=host --pid=host --privileged -i justincormack/nsenter1. (See later for what I think may work.)
Could someone who is more familiar with Docker's host networking iptables setup give some hints please! @djs55 :)
Enabling eth2:
cd ~/Library/Containers/com.docker.docker/Data/database/
git reset --hard
echo -n both > com.docker.driver.amd64-linux/network-2
git add com.docker.driver.amd64-linux/network-2
git commit -s -m 'Use the experimental networking mode'
Notes:
Version 17.06.0-ce-mac19 (18663)
Channel: stable
c98c1c25e0
iptables logging:
iptables -N LOGGING
iptables -A LOGGING -m limit --limit 200/min -j LOG --log-prefix "iptables-debug: "
iptables -A LOGGING -j RETURN
iptables -I DOCKER-USER -j LOGGING
Towards a solution:
iptables -F DOCKER-USER
iptables -A DOCKER-USER -j LOGGING
iptables -A DOCKER-USER -i eth2 -o br-ba8291f85d11 -j ACCEPT
iptables -A DOCKER-USER -i br-ba8291f85d11 -o eth2 -j ACCEPT
iptables -A DOCKER-USER -j RETURN
Testing:
OSX terminal 1$ docker run -it --name alpine --net=ww_dev_nw --ip 10.11.12.99 alpine:latest /bin/bash
OSX terminal 2$ ping 10.11.12.99
HOST$ tail -f /var/log/messages
INBOUND
Jul 30 17:07:55 moby kernel: iptables-debug: IN=eth2 OUT=br-ba8291f85d11 MAC=52:93:65:e0:89:ff:ae:de:48:00:33:64:08:00 SRC=192.168.64.1 DST=10.11.12.99 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=58456 PROTO=ICMP TYPE=8 CODE=0 ID=51204 SEQ=8
OUTBOUND
Jul 30 17:07:55 moby kernel: iptables-debug: IN=br-ba8291f85d11 OUT=eth2 PHYSIN=veth5e3bfe0 MAC=02:42:c8:a5:af:78:02:42:0a:0b:0c:63:08:00 SRC=10.11.12.99 DST=192.168.64.1 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=9596 PROTO=ICMP TYPE=0 CODE=0 ID=51204 SEQ=8
NOTE: What does work is poking holes into the firewall to push traffic to specific hosts and ports in the isolated network. E.g. 10.11.12.99$ nc -l -p 9000
and then from OSX: OSX$ netcat 192.168.64.3 9000
iptables -t nat -I PREROUTING -p tcp -i eth2 -d 192.168.64.3 --dport 9000 -j DNAT --to-dest 10.11.12.99
@ericdwhite ah sorry, I have another helper which sets up the iptables rules and routes. The scope of the above helper is just to provide bridge interface behaviour similar to Windows, but on macOS. The routing and iptables setup is done by another helper which I use on both Windows and Mac. (I plan to bundle my other helper at some point and open source it.)
My docker network is created by my docker-compose file (it's just a private IP segment dedicated to that specific cluster)
Then I finish it up with the following route configuration
sudo route add -net <NETWORK IP RANGE> -netmask <IP MASK> <IP OF TAP INTERFACE LOCALLY>
docker run --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i iptables -A FORWARD -i <INTERFACENAME IN DOCKER HOST> -j ACCEPT
(this is run from a node.js helper which also initialises any other databases or servers I need)
EDIT: I just realised you weren't talking about my solution, but the principle should be the same.
First, if anyone has suggestions or comments about this, please provide feedback and I will update this comment. It took me hours using tcpdump to finally figure out why I couldn't make this work initially, and I would hope others won't need to go through that.
This is a solution that worked for me without having to use the tuntap interface. It requires the experimental network feature by @djs55.
osx $ cd ~/Library/Containers/com.docker.docker/Data/database/
osx $ git reset --hard
osx $ echo -n both > com.docker.driver.amd64-linux/network-2
osx $ git add com.docker.driver.amd64-linux/network-2
osx $ git commit -s -m 'Use the experimental networking mode'
There should now be a network bridge: bridge100. This can be seen with
osx $ ifconfig -v bridge100
Create a custom network (ONCE)
docker network create --subnet 10.11.12.0/24 dev_nw --gateway 10.11.12.1
Enter the Host and add Forwarding Rule (EACH TIME DOCKER IS RESTARTED)
At this point you need to find out the host interface that is in the bridge bridge100. Typically this is eth1.
To enter the host:
osx $ docker run --rm --net=host --pid=host --privileged -it justincormack/nsenter1 /bin/sh
Then check the eth1 interface:
host # ifconfig eth1
Take note of the IP address of eth1. It should have an IP like 192.168.64.X.
host # iptables -I FORWARD -i eth1 -j ACCEPT
For some reason, when the enX interface is bound to bridge100 it is created with a hostfilter that drops responses from the container network.
osx $ ifconfig bridge100 | grep member
This should return an interface member like en7. This member needs to be removed and re-added:
osx $ sudo ifconfig bridge100 deletem en7
osx $ sudo ifconfig bridge100 addm en7
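Since the member interface name varies between machines, the deletem/addm step above could be scripted rather than typed. A small sketch that parses the member name out of ifconfig-style output (the sample line stands in for real `ifconfig bridge100` output and is only illustrative):

```shell
# A line of `ifconfig bridge100` output naming the bridge member (illustrative).
sample='    member: en7 flags=3<LEARNING,DISCOVER>'

# Pull out the interface name: field 2 on the "member:" line.
member=$(printf '%s\n' "$sample" | awk '/member:/ {print $2}')

# Generate the commands to bounce the member; piping them to `sudo sh`
# would actually run them, which is left to the reader.
echo "sudo ifconfig bridge100 deletem $member"   # sudo ifconfig bridge100 deletem en7
echo "sudo ifconfig bridge100 addm $member"      # sudo ifconfig bridge100 addm en7
```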
Using the IP of eth1, e.g. 192.168.64.X, add a route to the container network in OSX:
osx $ sudo route add 10.11.12.0/24 192.168.64.X
Start an alpine container on the container network dev_nw:
osx $ docker run -i -t --net=dev_nw --ip 10.11.12.99 --name alpine --rm alpine:latest /bin/sh
ping the container from OSX.
osx $ ping 10.11.12.99
This is the hardware filter that is removed by re-adding the interface to the bridge. @djs55 It would simplify this solution greatly if two changes were made:
1. the hostfilter was not locked to the host eth1 interface
2. the forwarding rule for eth1 within the host was automatically added
Then the end user would only require sudo to add the route on their OSX machine.
$ ifconfig -v bridge100
bridge100: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 index 15
eflags=1808000<ROUTER4,CL2K,ECN_ENABLE>
options=3<RXCSUM,TXCSUM>
ether 8e:85:90:a0:a1:64
inet 192.168.64.1 netmask 0xffffff00 broadcast 192.168.64.255
Configuration:
id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
ipfilter disabled flags 0x2
member: en7 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 14 priority 0 path cost 0
- hostfilter 1 hw: 3a:f7:9b:ec:2b:99 ip: 192.168.64.5
+ hostfilter 0 hw: 0:0:0:0:0:0 ip: 0.0.0.0
Address cache:
3a:f7:9b:ec:2b:99 Vlan1 en7 1182 flags=0<>
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
state availability: 0 (true)
desc: com.apple.NetworkSharing
qosmarking enabled: no mode: none
@ericdwhite looks nice, I see the route setup worked for you. I'll give this a go and see how it performs against the tuntap solution. If this works well, all Docker will need to do is provide an option to enable the experimental networking feature.
If it works well, I'll create an installer script and put it on GitHub as a replacement for my tuntap one.
Nice work 👍 (And to all the others who help contribute to this above in earlier posts)
Here is something of a workaround for this problem. From reading through most of the comments online, I found that this is currently a missing feature that is available on Linux. The best solution I found to get around the problem was to run in bridged mode and change everything to docker.for.mac.localhost
Here is an example container that I was using to test out with curl commands:
$ docker run -d --name elasticsearch -p 9200:9200 elasticsearch:1.4.5
$ curl http://localhost:9200
{
"status" : 200,
"name" : "Slither",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "1.4.5",
"build_hash" : "2aaf797f2a571dcb779a3b61180afe8390ab61f9",
"build_timestamp" : "2015-04-27T08:06:06Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
$ curl http://docker.for.mac.localhost:9200
curl: (6) Could not resolve host: docker.for.mac.localhost
The reason for the error above is that I am not inside my Docker image, and as a result that domain name is not defined. The good news is that you can get around that problem by just adding it to your /etc/hosts.
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
127.0.0.1 docker.for.mac.localhost
255.255.255.255 broadcasthost
::1 localhost
Now the curl command should work without issue:
$ curl http://docker.for.mac.localhost:9200
{
"status" : 200,
"name" : "Slither",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "1.4.5",
"build_hash" : "2aaf797f2a571dcb779a3b61180afe8390ab61f9",
"build_timestamp" : "2015-04-27T08:06:06Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
Here is how to set up the Docker image to test it out for yourself:
$ docker run --rm -it nginx /bin/bash
$ apt-get update
$ apt-get install curl
$ curl http://docker.for.mac.localhost:9200
Until --network=host starts working, this seems to be my solution. Hope it can help here. Oh, also something to note: I am not using VirtualBox on my Mac. https://store.docker.com/editions/community/docker-ce-desktop-mac
Also, as a piece of information, I am running Docker Version 17.06.2-ce-mac27 (19124).
Isn't that just the standard way to talk to a container by port mapping to the host? Aliasing the loopback adapter as docker.for.mac.localhost doesn't change anything. People are looking to be able to communicate with the container from the host by ip address, i.e. without mapping a port on the host to a port on the container.
/me goes to sleep
Yeah, you're right. It is not a solution to the issue this is about, but I could not find a better place to post it. What I wanted to do was use docker run --network=host and then just use localhost for everything, but as it is that can't be done without doing something like what @ericdwhite did. Really, his work is a better workaround; I just thought this could help if you wanted to get everything on localhost without having to change IP tables and use experimental tools.
It's not a solution.
@flaccid Is there a better place I can post this? I was having troubles finding a good spot to give this feedback.
@newdark I don't think so... HOSTS(5) is already well documented.
So is it still not resolved? That's unfortunate :(
Guys, I maintain the tool docker-dns, which exposes all containers as domains, and you can connect directly to a container's domain (or IP) after creating a tunnel with sshuttle. PS: I need to find another way of tunneling and remove sshuttle.
This is a kind of a request.
Hello there,
I have a testing/development scenario built with Docker containers. On Linux machines I can access them via the container IP (172.17.0.X) and interact via the exposed port.
But on Docker for Mac this isn't possible, because I don't know the IP of the VM.
With the Toolbox (Docker Machine) I can route 172.17.0.x to the docker-machine IP. Is there any way to do that with Docker for Mac?
I need this because I have multiple database containers, each with the same port... (so -p isn't the answer ;))