Open cm6051 opened 6 years ago
I have the same problem; for me it works if I use `endpoint_mode: dnsrr`. But I can't understand how to expose services to specific nodes. For example, `weave dns-lookup some-container` works, but `weave status` shows no DNS service, and if I run `ps -aux | grep weave` I see `--no-dns`. Maybe you have a more realistic example of using the weave plugin with swarm services for exposing some services.
@cm6051 can you run `curl 127.0.0.1:6783/status` on the node with Weave Net running and post the result here please?
I will comment that the service discovery piece is working, since ping tries to talk to a specific address. But then nothing comes back so I wonder if the network itself is set up ok.
Also the entire log would be helpful - it looks like you only posted 8 seconds' worth.
We had the same problem; it seems to happen on Docker version 18.06.
It works on 18.03.1-ce
Our test setup:

```yaml
version: '3.4'
services:
  foo:
    image: alpine
    entrypoint: sleep 999
  bar:
    image: alpine
    entrypoint: sleep 999
networks:
  default:
    driver: weaveworks/net-plugin:2.1.3
    ipam:
      driver: default
      config:
        - subnet: 10.32.1.0/24
```
To test:

```sh
docker exec -it $(docker ps | grep foo | awk '{print $1}') ping bar
docker exec -it $(docker ps | grep bar | awk '{print $1}') ip address
```
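The `grep`/`awk` pipeline in those test commands just picks the container ID out of the first column of `docker ps`. A minimal local sketch of that extraction, using fabricated sample rows instead of live docker output:

```shell
# Simulated `docker ps` output (fabricated rows, two columns of interest:
# the ID in column 1 and the service name at the end of the row).
ps_output='abc123  alpine  "sleep 999"  stack_foo.1
def456  alpine  "sleep 999"  stack_bar.1'

# grep selects the row whose name matches "foo"; awk prints column 1 (the ID).
printf '%s\n' "$ps_output" | grep foo | awk '{print $1}'   # prints: abc123
```
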
The symptom was that the IP resolved for bar and bar's actual IP address were different. It failed even with a one-node swarm. It also failed on weaveworks/net-plugin:2.4.0.
Can also confirm that I get the same error under weaveworks/net-plugin:2.4.0 using docker-ce-18.06.0-ce.
No way to find the LB endpoint.
This is my setup for a custom weave network:

```sh
docker network create --driver=weaveworks/net-plugin:2.4.0 --subnet 10.32.0.0/24 --gateway 10.32.0.1 --attachable weave
```
Could it be that launch.sh is not configured properly and is related to "providing access to the docker API (in containers)."?
```sh
#!/bin/sh

set -e

# Default if not supplied - same as weave net default
IPALLOC_RANGE=${IPALLOC_RANGE:-10.32.0.0/12}
HTTP_ADDR=${WEAVE_HTTP_ADDR:-127.0.0.1:6784}
STATUS_ADDR=${WEAVE_STATUS_ADDR:-0.0.0.0:6782}
HOST_ROOT=${HOST_ROOT:-/host}
LOG_LEVEL=${LOG_LEVEL:-info}

WEAVE_DIR="/host/var/lib/weave"
mkdir $WEAVE_DIR || true

echo "Starting launch.sh"

# Check if the IP range overlaps anything existing on the host
/usr/bin/weaveutil netcheck $IPALLOC_RANGE weave

# We need to get a list of Swarm nodes which might run the net-plugin:
# - In the case of missing restart.sentinel, we assume that net-plugin has started
#   for the first time via the docker-plugin cmd. So it's safe to request docker.sock.
# - If restart.sentinel is present, let weaver restore from it, as docker.sock is not
#   available to any plugin in general (https://github.com/moby/moby/issues/32815).
PEERS=
if [ ! -f "/restart.sentinel" ]; then
    PEERS=$(/usr/bin/weaveutil swarm-manager-peers)
fi

router_bridge_opts() {
    echo --datapath=datapath
    [ -z "$WEAVE_MTU" ] || echo --mtu "$WEAVE_MTU"
    [ -z "$WEAVE_NO_FASTDP" ] || echo --no-fastdp
}

multicast_opt() {
    [ -z "$WEAVE_MULTICAST" ] || echo "--plugin-v2-multicast"
}

exec /home/weave/weaver $EXTRA_ARGS --port=6783 $(router_bridge_opts) \
    --host-root=/host \
    --proc-path=/host/proc \
    --http-addr=$HTTP_ADDR --status-addr=$STATUS_ADDR \
    --no-dns \
    --ipalloc-range=$IPALLOC_RANGE \
    --nickname "$(hostname)" \
    --log-level=$LOG_LEVEL \
    --db-prefix="$WEAVE_DIR/weave" \
    --plugin-v2 \
    $(multicast_opt) \
    --plugin-mesh-socket='' \
    --docker-api='' \
    $PEERS
```
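Two shell idioms carry most of the script's configuration logic: `${VAR:-default}` parameter expansion for the tunables at the top, and the `restart.sentinel` file test that gates the peer lookup. A minimal local sketch of both (the sentinel path and the peer list below are stand-ins, not the real `/restart.sentinel` or `weaveutil` output):

```shell
# ${VAR:-default}: use the environment value when set, otherwise the literal.
unset IPALLOC_RANGE
echo "${IPALLOC_RANGE:-10.32.0.0/12}"   # unset -> prints the default
IPALLOC_RANGE=10.40.0.0/16
echo "${IPALLOC_RANGE:-10.32.0.0/12}"   # set -> prints the env value

# Sentinel guard: only fetch peers on first start, when no sentinel exists yet.
SENTINEL="./restart.sentinel"           # stand-in for /restart.sentinel
PEERS=
if [ ! -f "$SENTINEL" ]; then
    PEERS="peer-a peer-b"               # stand-in for weaveutil swarm-manager-peers
fi
echo "peers: $PEERS"
```
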
This doesn't seem to be related to weave. I just tried the setup described by @mvtorres here, and it behaves the same on my system with docker 18.06-ce without the weave plugin even installed. When used with

```yaml
deploy:
  endpoint_mode: dnsrr
```

the IP address of bar is visible to foo, which is strange. It looks as if docker started serving internal services using ingress :-O
I think we are in the same boat (although the title of this issue should be re-worded).
Effectively, weave is not working in swarm mode at all, yet overlay works fine in its place.
We have Docker 18.06.1-ce and launch two stacks wherein a container in each shares the very same network. The only particular networking characteristic we have applied is an alias when the container is attached to the shared network. We do not specify replicas. When I exec into the containers they can resolve each other, but ping reports the destination unreachable:
```
activemq@graves:/opt/apache-activemq-5.13.4$ ping billing-activemq
PING billing-activemq (10.101.0.6) 56(84) bytes of data.
From graves (10.101.0.9) icmp_seq=1 Destination Host Unreachable
From graves (10.101.0.9) icmp_seq=2 Destination Host Unreachable
From graves (10.101.0.9) icmp_seq=3 Destination Host Unreachable
From graves (10.101.0.9) icmp_seq=4 Destination Host Unreachable
From graves (10.101.0.9) icmp_seq=5 Destination Host Unreachable
From graves (10.101.0.9) icmp_seq=6 Destination Host Unreachable
```
Here's the routing table if it helps at all:
```
root@graves:/opt/apache-activemq-5.13.4# routel
         target            gateway          source    proto    scope    dev tbl
        default         172.18.0.1                                     eth1
       10.0.9.0 24                       10.0.9.49   kernel     link   eth0
     10.101.0.0 28                      10.101.0.9   kernel     link ethwe0
     172.18.0.0 16                     172.18.0.13   kernel     link   eth1
       10.0.9.0          broadcast       10.0.9.49   kernel     link   eth0 local
      10.0.9.49              local       10.0.9.49   kernel     host   eth0 local
     10.0.9.255          broadcast       10.0.9.49   kernel     link   eth0 local
     10.101.0.0          broadcast      10.101.0.9   kernel     link ethwe0 local
     10.101.0.9              local      10.101.0.9   kernel     host ethwe0 local
    10.101.0.15          broadcast      10.101.0.9   kernel     link ethwe0 local
      127.0.0.0          broadcast       127.0.0.1   kernel     link     lo local
      127.0.0.0 8            local       127.0.0.1   kernel     host     lo local
      127.0.0.1              local       127.0.0.1   kernel     host     lo local
127.255.255.255          broadcast       127.0.0.1   kernel     link     lo local
     172.18.0.0          broadcast     172.18.0.13   kernel     link   eth1 local
    172.18.0.13              local     172.18.0.13   kernel     host   eth1 local
 172.18.255.255          broadcast     172.18.0.13   kernel     link   eth1 local
        default        unreachable                   kernel              lo
        default        unreachable                   kernel              lo
```
Now, if we take down our stack and remove the shared network, then re-create the shared network using the overlay
driver, the problem disappears.
One other thing (that I cannot imagine is related): the documentation for installing the swarm plugin asks us to install `weaveworks/net-plugin:latest_release`, which is not found. If we refer to `store/weaveworks/net-plugin:latest_release` it does work. Here it is under `docker plugin ls`:

```
ID             NAME                                         DESCRIPTION                   ENABLED
c40c08f82ac5   store/weaveworks/net-plugin:latest_release   Weave Net plugin for Docker   true
```
> @cm6051 can you run `curl 127.0.0.1:6783/status` on the node with Weave Net running and post the result here please? I will comment that the service discovery piece is working, since ping tries to talk to a specific address. But then nothing comes back so I wonder if the network itself is set up ok.
Didn't see a response for this, so I went ahead and ran it since I'm having the same issue.
```
$ curl 127.0.0.1:6784/status
        Version: 2.4.1 (up to date; next check at 2018/10/23 11:11:38)

        Service: router
       Protocol: weave 1..2
           Name: 5e:53:e8:e9:d3:71(master0)
     Encryption: disabled
  PeerDiscovery: enabled
        Targets: 3
    Connections: 19 (18 established, 1 failed)
          Peers: 19 (with 342 established connections)
 TrustedSubnets: none

        Service: ipam
         Status: idle
          Range: 10.32.0.0/12
  DefaultSubnet: 10.32.0.0/12

        Service: plugin (v2)
```
My peers are only 1. I tried with docker 18.03, 18.03, and 18.09. Why do my peers and connections get stuck at 0 and 1? Should I be able to curl on 6782, 6783, and 6784?
```
ubuntu@mattjienv-mgr-000000:~/helpers$ curl 127.0.0.1:6782/status
        Version: 2.5.0 (up to date; next check at 2018/11/10 03:46:49)

        Service: router
       Protocol: weave 1..2
           Name: 32:64:ac:93:4e:ef(mattjienv-mgr-000000)
     Encryption: disabled
  PeerDiscovery: enabled
        Targets: 0
    Connections: 0
          Peers: 1
 TrustedSubnets: none

        Service: ipam
         Status: idle
          Range: 10.32.0.0/12
  DefaultSubnet: 10.32.0.0/12
```
```
ubuntu@mattjienv-mgr-000000:~/helpers$ docker plugin ls
ID             NAME                                         DESCRIPTION                       ENABLED
20ca0ab5a7c3   cloudstor:azure                              cloud storage plugin for Docker   true
3a902c69ea06   store/weaveworks/net-plugin:latest_release   Weave Net plugin for Docker       false
0c35a1de7b79   weaveworks/net-plugin:latest_release         Weave Net plugin for Docker       true
```

```
ubuntu@mattjienv-mgr-000000:~/helpers$ docker node ls
ID                            HOSTNAME                        STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
rbsgcnjiz0x4c0zfvy2t7a1ab     mattjienv-es-hdfs-data-000000   Ready    Active                          18.09.0
wzoy1cbnkhbq9omo4vzgfbfg5     mattjienv-es-hdfs-nn1-000000    Ready    Active                          18.09.0
l8s0yv8ynhj817ygu51lpw5f5     mattjienv-es-log-000000         Ready    Active                          18.09.0
a0b6lt4fgktk3hsmyrs884fyk     mattjienv-hdfs-data-000000      Ready    Active                          18.09.0
l74d44jcj37b83o4y5fgiz3jk *   mattjienv-mgr-000000            Ready    Active         Leader           18.09.0
uy0j6m0tf936iqvzzo07u5nw5     mattjienv-storm-000000          Ready    Active                          18.09.0
ch1itm1nmoveddfao8ov5518i     mattjienv-storm-000001          Ready    Active                          18.09.0
9kc3l2mnj0grbigpmz1h58slm     mattjienv-storm-000002          Ready    Active                          18.09.0
ju1g9w82n2ru0pncw94fa99gn     mattjienv-util-000000           Ready    Active                          18.09.0
ty0cudabz05ukhrs02f49difu     mattjienv-zk-kafka-000000       Ready    Active                          18.09.0
```

```
ubuntu@mattjienv-mgr-000000:~/helpers$ docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov 7 00:48:57 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov 7 00:16:44 2018
  OS/Arch:          linux/amd64
  Experimental:     false
```
When you define a service publishing a port like `ports: - 8003:80`, Docker Swarm returns its "ingress" routing mesh address for that port. See https://docs.docker.com/engine/swarm/ingress/.
These are virtual IP addresses, only defined for that specific port, which is why `ping` doesn't work: ping doesn't have port numbers.
As noted later "it's work if i use endpoint_mode: dnsrr" - this mode turns off the virtual IPs and returns the real underlying IPs of the containers. See https://docs.docker.com/engine/swarm/ingress/#without-the-routing-mesh.
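For reference, the two endpoint modes can be contrasted in a minimal compose fragment (service and image names here are illustrative; `vip` is the default and is normally left implicit):

```yaml
version: '3.4'
services:
  web-vip:
    image: alpine
    deploy:
      endpoint_mode: vip    # default: DNS returns one virtual IP, Swarm load-balances behind it
  web-dnsrr:
    image: alpine
    deploy:
      endpoint_mode: dnsrr  # DNS round-robin: returns the real container IPs
```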
I don't think there are any other points unanswered in comments, relating to the title of this issue.
@mploschiavo I cannot see the relevance of your comment to this issue. Suggest you open a new issue with the specifics.
Whilst ping may not actually work and ICMP may not make it through, it is an easy way of resolving an address when most images don't have dig or nslookup. We are seeing the same issue - the address resolved on the 'client' container is always one lower than the 'server' container's IP address in the last octet. The IP addresses returned are also on the weave subnet, not the ingress subnet. Docker 18.09.
I’m talking about the addresses created by Docker for the purpose of routing requests inside the cluster. The “ingress network” is for routing requests that arrive at the host.
Ok. So what is the 'correct' way of setting up our scenario? It seems from GitHub that there are a number of people in the same boat, and I believe I have followed the published instructions.
It’s all correct, in the sense that Docker Swarm is doing what it is designed to do.
If you use the DNS round-robin mode, or if you don't have any ports, then it behaves differently.
It would be a big help if that was mentioned in the docs.
So I need to use dnsrr if I have any ports exposed on the database server?
I changed the service definition for db to remove the exposed ports and use dnsrr:

```yaml
db:
  image: mariadb
  hostname: db.weave.local
  deploy:
    endpoint_mode: dnsrr
    restart_policy:
      condition: on-failure
      delay: 5s
      max_attempts: 3
      window: 120s
  environment:
    - MYSQL_ROOT_PASSWORD=example
    - MYSQL_DATABASE=keycloak
    - MYSQL_USER=keycloak
    - MYSQL_PASSWORD=password
  networks:
    - ensyte
  deploy:
    placement:
      constraints:
        - node.labels.type == primary
        - node.role == worker
  volumes:
    - mariadata:/var/lib/mysql
```
I get exactly the same symptoms:

```
drsmith@swarm01:~$ docker exec -it fbb1db037e32 bash
root@db:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14220: ethwe0@if14221: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default
    link/ether 52:a6:02:95:4b:c7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.13.20/16 brd 192.168.255.255 scope global ethwe0
       valid_lft forever preferred_lft forever
14222: eth0@if14223: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
```

```
[jboss@kc1 ~]$ ping db
PING db (192.168.13.19) 56(84) bytes of data.
```
> When you define a service publishing a port like `ports: - 8003:80`, Docker Swarm returns its "ingress" routing mesh address for that port. See https://docs.docker.com/engine/swarm/ingress/. These are virtual IP addresses, only defined for that specific port, which is why `ping` doesn't work. Ping doesn't have port numbers.

As was mentioned above, this issue is not related to exporting ports outside of the docker cluster using ingress. Returning the virtual mesh IP inside the docker cluster is the actual issue we are dealing with here. It doesn't make any sense to do such a thing when you have nodes inside one virtual network (i.e. inside the defined network for your stack). I am not sure what that virtual IP is even good for. Maybe it makes sense for communication between different defined networks inside the swarm cluster? I don't know.
Anyway, inside one defined network it is wrong to resolve the virtual IP of the container for anything, because then the container itself thinks it is reachable on one IP, while others on the same network see it running on a completely different IP, which is obviously an issue for some services.
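The complaint can be sketched with two illustrative addresses on the thread's service subnet (both values below are made up, standing in for what DNS returns to a client and what `ip address` shows inside the service container):

```shell
# Illustrative values only - in the real case RESOLVED_VIP comes from DNS
# inside a client container, CONTAINER_IP from `ip address` in the service.
RESOLVED_VIP="10.32.1.2"
CONTAINER_IP="10.32.1.3"

# Both sit on the same service subnet, yet they are different hosts, so a
# service that advertises its own address is unreachable at the address
# clients actually resolve.
if [ "$RESOLVED_VIP" != "$CONTAINER_IP" ]; then
    echo "mismatch: clients dial $RESOLVED_VIP, service thinks it is $CONTAINER_IP"
fi
```
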
> Returning of virtual mesh IP inside the docker cluster is the actual issue we are dealing with here. It doesn't make any sense to do such thing

This is Docker behaviour, not something we have control of at the network layer.
That said, I would expect the virtual IP to re-route to the container IP somehow. I don't know enough about Swarm mesh routing to say where it's going wrong.
As a workaround, one should use legacy mode (v1) and create an overlay network for the swarm. Weave Net comes in handy when I need to establish a VPN (between my VM's private network and a public VM), but to make swarming work over it, an overlay network should be utilized. The Weave plugin (V2) simply won't do. But with overlay + Weave's legacy mode, remote hosts find services across the entire swarm and services are properly routed; even if I drain the swarm's manager node, curl requests pass over to workers and back just fine.
I wasted a few days figuring this out. I hope the legacy mode will stay for a while, because it's awesome. I can't say the same about V2 :=/
Here are the few important commands. On the manager node (I am using 192.77... because the default conflicts with my network):

```sh
$ weave launch --ipalloc-range 192.77.1.0/24
$ docker swarm init --advertise-addr $(weave expose)
# prints the join command to run on workers, e.g.:
#   docker swarm join --token SWMTKN-1-3ljuqvtqgbli21dx1i1z4oar5kny13ymic6pv3dm8cv7ya7oh3-au1j2ud72xl3q3d0gp0j8bi39 192.77.1.1:2377
$ docker network create -d overlay --ip-range 192.77.1.0/24 --subnet 192.77.1.0/24 weave2
```

On worker nodes:

```sh
$ weave launch --ipalloc-range 192.77.1.0/24 <specify manager's node ip by which it's accessible from the workers>
$ weave expose
$ # here use the `docker swarm join` output from the manager's swarm init above to join the swarm
```

Now you can create services and stacks on the manager node and enjoy proper routing. The Weave Net documentation is misleading in saying that one can only use the V2 plugin for swarm mode. Not true. In fact, the only way I could make it work was with the legacy mode plus the overlay driver networking.
I have two identical Swarms, except one is running Docker 18.03.1-ce and the other 18.06.1-ce.
With version 18.06.1-ce I get exactly this same issue, but on 18.03.1-ce it works as expected. As a workaround I'm considering downgrading to 18.03.1-ce.
Downgrading to Docker 18.03.1-ce solved the issue.
Hi there,
I'm trying to get service discovery working with weave-net.
I'm using a docker stack file like this:
I would expect to be able to ping the service names "nginx3" and "nginx4" from containers in this stack, but it doesn't work:
The error `Unable to find load balancing endpoint for network mh6gnsbfiatinqiluv6aterbb` occurs - I guess this is a symptom of the problem...
A similar stack file, not using weave-net, is like this:
With this one it works OK:
Versions:
Logs:
Network: