dimaspivak opened this issue 7 years ago
Thanks for your report. I suspect your diagnosis is correct: that UDP port 4789 is not -- by default -- forwarded into the VM.
We recently experimented with adding support for exposing the Prometheus stats endpoint; perhaps something similar could work for your case. To see how this works, run a root shell in the VM using something like
docker run --rm --net=host --pid=host --privileged -it justincormack/debian nsenter -m -t 1 sh
and then read /etc/init.d/docker, in particular:
# On desktop forward metrics to host if enabled in daemon.json
case "$(mobyplatform)" in
    windows|mac)
        METRICS_ADDR=$(cat /etc/docker/daemon.json | jq -e -r '."metrics-addr"')
        if [ $? -eq 0 ]
        then
            METRICS_IP="$(echo "$METRICS_ADDR" | cut -d: -f1)"
            METRICS_PORT="$(echo "$METRICS_ADDR" | cut -d: -f2)"
            /usr/bin/slirp-proxy -proto tcp -host-ip 0.0.0.0 -host-port "$METRICS_PORT" -container-ip "$METRICS_IP" -container-port "$METRICS_PORT" -i -no-local-ip &
        fi
        ;;
esac
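The key step in that script is splitting the host:port pair before handing it to slirp-proxy. A minimal sketch of just the parsing logic (the metrics-addr value here is a made-up example, not taken from any real daemon.json):

```shell
# Sketch of the init script's host:port split.
# "127.0.0.1:9323" is a hypothetical metrics-addr value.
METRICS_ADDR="127.0.0.1:9323"
METRICS_IP="$(echo "$METRICS_ADDR" | cut -d: -f1)"
METRICS_PORT="$(echo "$METRICS_ADDR" | cut -d: -f2)"
echo "ip=$METRICS_IP port=$METRICS_PORT"   # prints "ip=127.0.0.1 port=9323"
```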
To expose UDP port 4789 you could run something like
/usr/bin/slirp-proxy -proto udp -host-ip 0.0.0.0 -host-port 4789 -container-ip <IP within VM> -container-port 4789 -i -no-local-ip
(this is the user-space proxy used to expose host ports to containers). I assume -container-ip <IP within VM> would have to refer to the interface where the overlay network is configured. Perhaps -container-ip 192.168.65.1?
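To find a plausible value for -container-ip, one option is to list the VM's IPv4 addresses from the root shell opened earlier. This is only a sketch; the interface names and addresses will differ per setup, and it assumes iproute2 is available in the VM:

```shell
# List IPv4 addresses inside the VM to help pick the interface
# that carries the overlay/VXLAN traffic. Output varies by machine.
ip -4 -o addr show | awk '{print $2, $4}'
```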
Could you try running this proxy and let me know if it helps? If we figure out a nice way to do it then we could consider a configuration option of some kind.
@dimaspivak : I just wanted to check in to see if you've had a chance to try out the proxy suggested above. If you could confirm it works, we can move forward with integrating it more fully into Docker for Mac.
Hey @avsm, sorry for being MIA; been a bit swamped with work. This looks really promising and I'll be happy to give it a shot in a couple days when life slows down a bit :).
@dimaspivak no rush at all from our end; we can hold this issue open as long as you need. Good luck with the work!
@dimaspivak If you get a chance, I would be interested in your results.
Hi folks, just want to report that I'm having the same issue. My swarm manager is an Ubuntu machine and I'm running a service on a Docker for Mac node.
I tried running the proxy as suggested which yielded the following output:
/ # /usr/bin/slirp-proxy -proto udp -host-ip 0.0.0.0 -host-port 4789 -container-ip 192.168.65.2 -container-port 4789 -i -no-local-ip &
/ # 2017/04/26 02:30:46 exposePort udp:0.0.0.0:4789:udp:192.168.65.2:4789
2017/04/26 02:30:46 Proxy running
I then joined a swarm with an overlay network from the Docker for Mac instance and started a service on the Docker for Mac instance with a published port. The accessibility of the container on the Docker for Mac instance is as follows:
- A container running on the management host can successfully ping the Docker for Mac container
- The Docker for Mac container can be accessed on its published port from the Mac (as localhost)
- The Docker for Mac container cannot ping or connect to the container running on the management host
- The Docker for Mac container cannot be accessed on its published port via the management host (the ingress network doesn't work)
Let me know if you need more details or if I haven't run the experiment as intended, I'd really like to resolve this.
@avsm and @djs55 I can confirm the same results as @bsb20 and @dimaspivak . Please let me know if I can do anything to help move this forward.
Has anyone resolved this?
I also have a problem like this. I use swarm mode with a defined network: my Linux server runs the manager and my Mac runs a worker. Containers on the Mac can run the swarm service correctly, but they cannot communicate with other containers on the Linux server. My Docker on Mac version is 18.02.0-ce-rc2.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so.
Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows. /lifecycle stale
Is there a resolution or workaround for this? I know this issue has been lying around for over a year now :)
Hello there? Same problem here!
Mac:
MBP-Aleksandr:~ nyonor$ docker -v
Docker version 18.09.1, build 4c52b90
MBP-Aleksandr:~ nyonor$
Ubuntu server 18.04.1 LTS
nyonor@ubuntu-megas-server:~$ docker -v
Docker version 18.09.1, build 4c52b90
/remove-lifecycle stale
/lifecycle frozen
Are there any plans to address this issue?
Expected behavior
Using an external KV store and a correct set of Docker for Mac advanced daemon options, it should be possible to run a container on Docker for Mac, connect it to a pre-existing Docker for Linux overlay network, and interact with other containers on the network.
Actual behavior
While DNS resolution works, none of the packets from the Docker for Mac container are received.
Information
Steps to reproduce the behavior
Set cluster-store to an external Consul KV instance and set cluster-advertise to eth0:2375. I did the same steps on a second Linux machine as well, purely to test that the overlay network I'd eventually create is actually working. I also run a container called node-1 and attach it to this overlay network.

Create the overlay network (docker network create --driver overlay mynetwork). I used the aforementioned second Linux machine as a check that containers spanning these two Linux machines can communicate correctly.

Since Docker for Mac doesn't support exposing the daemon over TCP (-H in the daemon options), use the socat workaround suggested here. With this in place, I can verify that _ping works and reaches my Docker for Mac instance over the internet using my external IP address. I use this same external IP address when setting cluster-advertise, and the same cluster-store Consul address as before.

Run docker run --network cluster alpine:latest ping node-1. While the hostname gets resolved correctly, no transmitted packets get received.

I assume this is related to the fact that the 4789 UDP port for VXLAN isn't being forwarded from the host to the HyperKit VM. Is there any hope of a configuration that might make it possible for this to work? My company's specific use case has us wanting to run locally-built tests out of a container on a dev's laptop against the overlay network we're hosting in the cloud. If not deserving of its own configuration option, is there any other way I can achieve this?
. While the hostname gets resolved correctly, no transmitted packets get received.I assume this is related to the fact that the 4789 UDP port for VXLAN isn't being forwarded from the host to the HyperKit VM. Is there any hope of a configuration that might make it possible for this to work? My company's specific use case has us wanting to run locally-built tests out of a container on a dev's laptop against the overlay network we're hosting in the cloud. If not deserving of its own configuration option, is there any other way I can achieve this?