JonDum opened this issue 6 years ago
I've never used swarm mode and I'm not familiar with how it configures the containers, so it's likely that Dinghy doesn't fully support it. Currently, the DNS names are based on the `com.docker.compose.project` and `com.docker.compose.service` labels that docker-compose sets on containers, or alternatively you can explicitly set a hostname by setting the `VIRTUAL_HOST` env var on a container. You can see the logic at the bottom of the nginx template: https://github.com/codekitchen/dinghy-http-proxy/blob/master/nginx.tmpl#L238
If swarm sets similar labels on containers, it may be as simple as extending the logic in that template. But I'm not positive that's all that would be required; for instance, I'm not sure whether swarm sets up its virtual networks the same way compose does.
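If you want to check, something like this should dump the labels swarm puts on a task container (`<container>` being any swarm-created container ID or name):

```
# Show the labels on a swarm-created container; if it only has
# com.docker.swarm.* labels rather than the compose ones, the nginx
# template would need to be taught about those too.
docker inspect --format '{{ json .Config.Labels }}' <container>
```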
But are you saying you also can't connect to just `http://$(dinghy ip)`? That should respond with a "Welcome to the Dinghy HTTP Proxy" placeholder page; if it doesn't do that, then something more might be going on.
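A quick way to check, assuming the `dinghy` CLI is on your PATH:

```
# Should return the "Welcome to the Dinghy HTTP Proxy" placeholder page
curl -v "http://$(dinghy ip)/"
```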
> But are you saying you also can't connect to just `http://$(dinghy ip)`?
Yeah, that's what's super weird. Nothing is connecting, but I can confirm they're running like normal via `docker service logs <stack>_<service>`:
```
CONTAINER ID   IMAGE                               COMMAND                  CREATED        STATUS        PORTS                                                                           NAMES
917babc8c051   flyvana/apiworker:dev               "npm run dev"            10 hours ago   Up 10 hours   3000/tcp, 9229-9230/tcp                                                         api_worker.1.svf5grunqquoz6vvtjzf4fxoa
f85981f98108   flyvana/webpackdevserver:dev        "npm run start"          10 hours ago   Up 10 hours   8080/tcp                                                                        api_webpackdevserver.1.9pokhrejn9us389uz0pz2pjfa
3bc40a061237   arangodb/arangodb:latest            "/entrypoint.sh aran…"   11 hours ago   Up 11 hours   8529/tcp                                                                        api_arango.1.t3z9rj3nlvleio5o6zdrmu6z6
7050c3293cc5   docker:latest                       "docker-entrypoint.s…"   11 hours ago   Up 11 hours                                                                                   prune_images.sgttlkj8zrbxpdm0ave48o915.8mby1burjb5sz4gxfdtl41nzs
3f222e527e60   emilevauge/whoami:latest            "/whoamI"                11 hours ago   Up 11 hours   80/tcp                                                                          proxy_whoami.1.l8u8gzvanyne6pohkvwt4ik5x
78a3bc4068c6   traefik:1.7-alpine                  "/entrypoint.sh --ap…"   11 hours ago   Up 11 hours   80/tcp                                                                          proxy_traefik.1.iyym9ejhreuxx01eporsyjukl
bf5657d1bafa   rabbitmq:3.7.8-management-alpine    "docker-entrypoint.s…"   11 hours ago   Up 11 hours   4369/tcp, 5671-5672/tcp, 15671-15672/tcp, 25672/tcp                             api_rabbitmq.1.4t4rtjahmounak20sklr909g7
aabf8bce9c63   codekitchen/dinghy-http-proxy:2.5   "/app/docker-entrypo…"   11 hours ago   Up 11 hours   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 19322/tcp, 0.0.0.0:19322->19322/udp   dinghy_http_proxy
```
```
WARNING: /etc/nginx/dhparam/dhparam.pem was not found. A pre-generated dhparam.pem will be used for now while a new one is being generated in the background. Once the new dhparam.pem is in place, nginx will be reloaded.
forego      | starting nginx.1 on port 5000
forego      | starting dockergen.1 on port 5100
forego      | starting dnsmasq.1 on port 5300
dockergen.1 | 2018/11/27 06:35:14 Generated '/etc/nginx/conf.d/default.conf' from 1 containers
dockergen.1 | 2018/11/27 06:35:14 Running '/app/reload-nginx'
dockergen.1 | 2018/11/27 06:35:14 [/app/reload-nginx]: currently in 1 networks, found 1 bridge networks, 0 to join, 0 to leave
dockergen.1 | 2018/11/27 06:35:14 Watching docker events
dockergen.1 | 2018/11/27 06:35:14 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification '/app/reload-nginx'
dnsmasq.1   | dnsmasq: started, version 2.76 cachesize 150
dnsmasq.1   | dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
2018/11/27 06:35:46 [notice] 59#59: signal process started
Generating DH parameters, 2048 bit long safe prime, generator 2
This is going to take a long time
dhparam generation complete, reloading nginx
dockergen.1 | 2018/11/27 06:39:27 Received event start for container bf5657d1bafa
dockergen.1 | 2018/11/27 06:39:27 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification '/app/reload-nginx'
dockergen.1 | 2018/11/27 06:39:32 Received event start for container 78a3bc4068c6
dockergen.1 | 2018/11/27 06:39:32 Received event start for container 3f222e527e60
dockergen.1 | 2018/11/27 06:39:32 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification '/app/reload-nginx'
dockergen.1 | 2018/11/27 06:39:32 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification '/app/reload-nginx'
dockergen.1 | 2018/11/27 06:39:45 Received event start for container 7050c3293cc5
dockergen.1 | 2018/11/27 06:39:45 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification '/app/reload-nginx'
dockergen.1 | 2018/11/27 06:39:48 Received event start for container 3bc40a061237
dockergen.1 | 2018/11/27 06:39:48 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification '/app/reload-nginx'
dockergen.1 | 2018/11/27 06:52:29 Received event start for container f85981f98108
dockergen.1 | 2018/11/27 06:52:29 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification '/app/reload-nginx'
dockergen.1 | 2018/11/27 06:52:30 Received event start for container 917babc8c051
dockergen.1 | 2018/11/27 06:52:30 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification '/app/reload-nginx'
```
ArangoDB is fairly straightforward, so maybe that's a good one to look at: `arangodb/arangodb:latest` starts up the db and binds to :8529 (from its logs: `api_arango.1.t3z9rj3nlvle@dinghy | INFO using endpoint 'http+tcp://0.0.0.0:8529' for non-encrypted requests`).
`stacks/api.yml`:

```yaml
arango:
  image: arangodb/arangodb:latest
  ports:
    - 8529:8529
  networks:
    - backend
  volumes:
    - arango_data:/var/lib/arangodb3
    - /var/run/docker.sock:/var/run/docker.sock
```
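In case it's relevant, this is roughly how the `VIRTUAL_HOST` env var mentioned above could be added to the running service (just a sketch; `arango.docker` is a made-up hostname and I don't know whether the proxy would even pick it up in swarm mode):

```
# Sketch only: set the VIRTUAL_HOST env var dinghy's proxy keys on, on the
# already-deployed swarm service. "arango.docker" is an arbitrary choice.
docker service update --env-add VIRTUAL_HOST=arango.docker api_arango
```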
Could it be the attachable networks feature of docker swarm that is preventing them from communicating?
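If so, I'd guess a workaround would look something like this (totally untested sketch; `api_shared` is a made-up network name, and the stack would have to be redeployed to use it as an external network):

```
# Create an attachable overlay (stack-created networks aren't attachable by
# default), then join the standalone dinghy proxy container to it so it can
# reach the swarm services directly.
docker network create --driver overlay --attachable api_shared
docker network connect api_shared dinghy_http_proxy
```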
``` [ { "ID": "4aug6ld7s68zadmgpm1h6w7r5", "Version": { "Index": 2191 }, "CreatedAt": "2018-11-27T06:39:05.196628789Z", "UpdatedAt": "2018-11-27T06:51:44.985922528Z", "Spec": { "Name": "api_arango", "Labels": { "com.docker.stack.image": "arangodb/arangodb:latest", "com.docker.stack.namespace": "api" }, "TaskTemplate": { "ContainerSpec": { "Image": "arangodb/arangodb:latest@sha256:356e09720cc5acc2a81b9be9da8537386a108e68e6c11618531bfc4cea0c2717", "Labels": { "com.docker.stack.namespace": "api" }, "Env": [ "ARANGO_NO_AUTH=1" ], "Privileges": { "CredentialSpec": null, "SELinuxContext": null }, "Mounts": [ { "Type": "volume", "Source": "api_arango_data", "Target": "/var/lib/arangodb3", "VolumeOptions": { "Labels": { "com.docker.stack.namespace": "api" } } }, { "Type": "bind", "Source": "/var/run/docker.sock", "Target": "/var/run/docker.sock" } ], "StopGracePeriod": 10000000000, "DNSConfig": {}, "Isolation": "default" }, "Resources": {}, "RestartPolicy": { "Condition": "any", "Delay": 5000000000, "MaxAttempts": 0 }, "Placement": { "Platforms": [ { "Architecture": "amd64", "OS": "linux" } ] }, "Networks": [ { "Target": "zzmt7vbnwjc4g5uacbgwlj5cz", "Aliases": [ "arango" ] } ], "ForceUpdate": 0, "Runtime": "container" }, "Mode": { "Replicated": { "Replicas": 1 } }, "UpdateConfig": { "Parallelism": 1, "FailureAction": "pause", "Monitor": 5000000000, "MaxFailureRatio": 0, "Order": "stop-first" }, "RollbackConfig": { "Parallelism": 1, "FailureAction": "pause", "Monitor": 5000000000, "MaxFailureRatio": 0, "Order": "stop-first" }, "EndpointSpec": { "Mode": "vip", "Ports": [ { "Protocol": "tcp", "TargetPort": 8529, "PublishedPort": 8529, "PublishMode": "ingress" } ] } }, "PreviousSpec": { "Name": "api_arango", "Labels": { "com.docker.stack.image": "arangodb/arangodb:latest", "com.docker.stack.namespace": "api" }, "TaskTemplate": { "ContainerSpec": { "Image": "arangodb/arangodb:latest@sha256:356e09720cc5acc2a81b9be9da8537386a108e68e6c11618531bfc4cea0c2717", "Labels": { "com.docker.stack.namespace": "api" }, "Env": [ "ARANGO_NO_AUTH=1" ], "Privileges": { "CredentialSpec": null, "SELinuxContext": null }, "Mounts": [ { "Type": "volume", "Source": "api_arango_data", "Target": "/var/lib/arangodb3", "VolumeOptions": { "Labels": { "com.docker.stack.namespace": "api" } } }, { "Type": "bind", "Source": "/var/run/docker.sock", "Target": "/var/run/docker.sock" } ], "Isolation": "default" }, "Resources": {}, "Placement": { "Platforms": [ { "Architecture": "amd64", "OS": "linux" } ] }, "Networks": [ { "Target": "zzmt7vbnwjc4g5uacbgwlj5cz", "Aliases": [ "arango" ] } ], "ForceUpdate": 0, "Runtime": "container" }, "Mode": { "Replicated": { "Replicas": 1 } }, "EndpointSpec": { "Mode": "vip", "Ports": [ { "Protocol": "tcp", "TargetPort": 8529, "PublishedPort": 8529, "PublishMode": "ingress" } ] } }, "Endpoint": { "Spec": { "Mode": "vip", "Ports": [ { "Protocol": "tcp", "TargetPort": 8529, "PublishedPort": 8529, "PublishMode": "ingress" } ] }, "Ports": [ { "Protocol": "tcp", "TargetPort": 8529, "PublishedPort": 8529, "PublishMode": "ingress" } ], "VirtualIPs": [ { "NetworkID": "1s59n2aae073fi46hgarnkj2s", "Addr": "10.255.0.3/16" }, { "NetworkID": "zzmt7vbnwjc4g5uacbgwlj5cz", "Addr": "10.0.1.2/24" } ] } } ] ```
```js [ { "Id": "3bc40a06123768d9ade1ff9a74847040df8624822452a4bee8721ce946925f39", "Created": "2018-11-27T06:39:47.944277074Z", "Path": "/entrypoint.sh", "Args": [ "arangod" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 5120, "ExitCode": 0, "Error": "", "StartedAt": "2018-11-27T06:39:48.696824926Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:21b646b10c7ef553ad043860ac5f872be51f3d08b12ddc95cae6ff021a720415", "ResolvConfPath": "/mnt/sda1/var/lib/docker/containers/3bc40a06123768d9ade1ff9a74847040df8624822452a4bee8721ce946925f39/resolv.conf", "HostnamePath": "/mnt/sda1/var/lib/docker/containers/3bc40a06123768d9ade1ff9a74847040df8624822452a4bee8721ce946925f39/hostname", "HostsPath": "/mnt/sda1/var/lib/docker/containers/3bc40a06123768d9ade1ff9a74847040df8624822452a4bee8721ce946925f39/hosts", "LogPath": "/mnt/sda1/var/lib/docker/containers/3bc40a06123768d9ade1ff9a74847040df8624822452a4bee8721ce946925f39/3bc40a06123768d9ade1ff9a74847040df8624822452a4bee8721ce946925f39-json.log", "Name": "/api_arango.1.t3z9rj3nlvleio5o6zdrmu6z6", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": null, "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "default", "PortBindings": {}, "RestartPolicy": { "Name": "", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Dns": null, "DnsOptions": null, "DnsSearch": null, "ExtraHosts": null, "GroupAdd": null, "IpcMode": "shareable", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": null, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "default", "CpuShares": 0, "Memory": 0, "NanoCpus": 0, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": null, "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": null, "DeviceCgroupRules": null, "DiskQuota": 0, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": 0, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "Mounts": [ { "Type": "volume", "Source": "api_arango_data", "Target": "/var/lib/arangodb3", "VolumeOptions": { "Labels": { "com.docker.stack.namespace": "api" } } }, { "Type": "bind", "Source": "/var/run/docker.sock", "Target": "/var/run/docker.sock" } ], "MaskedPaths": [ "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware" ], "ReadonlyPaths": [ "/proc/asound", "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ] }, "GraphDriver": { "Data": { "LowerDir": 
"/mnt/sda1/var/lib/docker/overlay2/07eddce7f89ae1ef33a9ed15ad7b0ddd4ffab09c10916ef40bcd56fae9f47f20-init/diff:/mnt/sda1/var/lib/docker/overlay2/49b393370fb196488340f807a43097c4103f3fa756aea69deef47dafb00fa5a8/diff:/mnt/sda1/var/lib/docker/overlay2/40b20c3608af262d9c7880c047cf377a88cee5c936c244ffa60f16e8bdd628e7/diff:/mnt/sda1/var/lib/docker/overlay2/6bbccf98c4ad7ee4f8a6027673b2746b8134da9a1dcf2f41189c9cbdeb7eedbc/diff:/mnt/sda1/var/lib/docker/overlay2/63a18802c5ea29758d3890631124a23d01fa5ca4f0e708ad75fa9d50adee1662/diff:/mnt/sda1/var/lib/docker/overlay2/7c1be38b946714c62c93cb69fe46cc51ad08ab0dce003feebbad5b436c501211/diff:/mnt/sda1/var/lib/docker/overlay2/4f7a86e0ab50d6f32f8f21e6d1c846ca9742f95c2cb1909fd4b5389a564adeab/diff:/mnt/sda1/var/lib/docker/overlay2/27d82c9e3250ced495de8dd68972eaf77d6123cc1ab2edeb7f7423bdf3f22ed9/diff", "MergedDir": "/mnt/sda1/var/lib/docker/overlay2/07eddce7f89ae1ef33a9ed15ad7b0ddd4ffab09c10916ef40bcd56fae9f47f20/merged", "UpperDir": "/mnt/sda1/var/lib/docker/overlay2/07eddce7f89ae1ef33a9ed15ad7b0ddd4ffab09c10916ef40bcd56fae9f47f20/diff", "WorkDir": "/mnt/sda1/var/lib/docker/overlay2/07eddce7f89ae1ef33a9ed15ad7b0ddd4ffab09c10916ef40bcd56fae9f47f20/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "volume", "Name": "api_arango_data", "Source": "/mnt/sda1/var/lib/docker/volumes/api_arango_data/_data", "Destination": "/var/lib/arangodb3", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" }, { "Type": "bind", "Source": "/var/run/docker.sock", "Destination": "/var/run/docker.sock", "Mode": "", "RW": true, "Propagation": "rprivate" }, { "Type": "volume", "Name": "5cdd0061aae979278b70c00c5c1b24b3d20b6c6fe5cc86a56f64a62457935940", "Source": "/mnt/sda1/var/lib/docker/volumes/5cdd0061aae979278b70c00c5c1b24b3d20b6c6fe5cc86a56f64a62457935940/_data", "Destination": "/var/lib/arangodb3-apps", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "3bc40a061237", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "8529/tcp": {} }, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "ARANGO_NO_AUTH=1", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "ARCHITECTURE=amd64", "DEB_PACKAGE_VERSION=1", "ARANGO_VERSION=3.3.19", "ARANGO_URL=https://download.arangodb.com/arangodb33/Debian_9.0", "ARANGO_PACKAGE=arangodb3-3.3.19-1_amd64.deb", "ARANGO_PACKAGE_URL=https://download.arangodb.com/arangodb33/Debian_9.0/amd64/arangodb3-3.3.19-1_amd64.deb", "ARANGO_SIGNATURE_URL=https://download.arangodb.com/arangodb33/Debian_9.0/amd64/arangodb3-3.3.19-1_amd64.deb.asc" ], "Cmd": [ "arangod" ], "ArgsEscaped": true, "Image": "arangodb/arangodb:latest@sha256:356e09720cc5acc2a81b9be9da8537386a108e68e6c11618531bfc4cea0c2717", "Volumes": { "/var/lib/arangodb3": {}, "/var/lib/arangodb3-apps": {} }, "WorkingDir": "", "Entrypoint": [ "/entrypoint.sh" ], "OnBuild": null, "Labels": { "com.docker.stack.namespace": "api", "com.docker.swarm.node.id": "sgttlkj8zrbxpdm0ave48o915", "com.docker.swarm.service.id": "4aug6ld7s68zadmgpm1h6w7r5", "com.docker.swarm.service.name": "api_arango", "com.docker.swarm.task": "", "com.docker.swarm.task.id": "t3z9rj3nlvleio5o6zdrmu6z6", "com.docker.swarm.task.name": "api_arango.1.t3z9rj3nlvleio5o6zdrmu6z6" } }, "NetworkSettings": { "Bridge": "", "SandboxID": "935f0f3a90bbf0568f3642632f91c7e1a070edba990923bd9ebf17bc0b4d9b8a", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "8529/tcp": null }, 
"SandboxKey": "/var/run/docker/netns/935f0f3a90bb", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "api_backend": { "IPAMConfig": { "IPv4Address": "10.0.1.3" }, "Links": null, "Aliases": [ "3bc40a061237" ], "NetworkID": "zzmt7vbnwjc4g5uacbgwlj5cz", "EndpointID": "c8e8d676cf423fcd26475605f220374552f4b1663fcc216628dc4d032080a09d", "Gateway": "", "IPAddress": "10.0.1.3", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:0a:00:01:03", "DriverOpts": null }, "ingress": { "IPAMConfig": { "IPv4Address": "10.255.0.4" }, "Links": null, "Aliases": [ "3bc40a061237" ], "NetworkID": "1s59n2aae073fi46hgarnkj2s", "EndpointID": "d13d175214402fc782301bdaf7f3938f9d319c041fb33bf7d0aee4d265444969", "Gateway": "", "IPAddress": "10.255.0.4", "IPPrefixLen": 16, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:0a:ff:00:04", "DriverOpts": null } } } } ] ```
And just in case:

```
lo0: flags=8049
```
Inspecting some of the networks now:

```
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
zzmt7vbnwjc4        api_backend         overlay             swarm
1d8ed08c89c1        bridge              bridge              local
6517a530bf12        docker_gwbridge     bridge              local
79e71a989847        host                host                local
1s59n2aae073        ingress             overlay             swarm
5ec7fc22558e        none                null                local
7gve724r7f8v        proxy               overlay             swarm
vucvx3byr0pz        prune_default       overlay             swarm
```
All of my swarm containers appear to be under the `ingress` network. `dinghy ip` shows up as a "Peer" in this network.
```js [ { "Name": "ingress", "Id": "1s59n2aae073fi46hgarnkj2s", "Created": "2018-11-27T06:38:59.378040762Z", "Scope": "swarm", "Driver": "overlay", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": null, "Config": [ { "Subnet": "10.255.0.0/16", "Gateway": "10.255.0.1" } ] }, "Internal": false, "Attachable": false, "Ingress": true, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": { "3bc40a06123768d9ade1ff9a74847040df8624822452a4bee8721ce946925f39": { "Name": "api_arango.1.t3z9rj3nlvleio5o6zdrmu6z6", "EndpointID": "d13d175214402fc782301bdaf7f3938f9d319c041fb33bf7d0aee4d265444969", "MacAddress": "02:42:0a:ff:00:04", "IPv4Address": "10.255.0.4/16", "IPv6Address": "" }, "78a3bc4068c690f40497a656f08e1f827e400101441660e2d1495af72db3cd7a": { "Name": "proxy_traefik.1.iyym9ejhreuxx01eporsyjukl", "EndpointID": "21348455113e6aad8cd23e009131d6a264a77a28d4f3e9027aed6ee3d83dcc86", "MacAddress": "02:42:0a:ff:00:0e", "IPv4Address": "10.255.0.14/16", "IPv6Address": "" }, "917babc8c051c1ce877ddf4db5bed36ecfe418270900095fbfbf38ca751b72d2": { "Name": "api_worker.1.svf5grunqquoz6vvtjzf4fxoa", "EndpointID": "4c3037997bf7dc5a6a3a46373f7338813347969e45204189484a41f51d106c5f", "MacAddress": "02:42:0a:ff:01:4e", "IPv4Address": "10.255.1.78/16", "IPv6Address": "" }, "bf5657d1bafa1b0262ea5ba69a2431ec0dfd91e1076cab9bb94b52386c532140": { "Name": "api_rabbitmq.1.4t4rtjahmounak20sklr909g7", "EndpointID": "e9855adfd739400b4de974d9bf35ca73dad5f2d4e1df293cab6963157e3712d0", "MacAddress": "02:42:0a:ff:00:06", "IPv4Address": "10.255.0.6/16", "IPv6Address": "" }, "f85981f9810857ed04491e1513a6b168e2eb2312d278ad6762bc13bb48739fe9": { "Name": "api_webpackdevserver.1.9pokhrejn9us389uz0pz2pjfa", "EndpointID": "2b08ada7c275f28b8647d9dc0dc5d6d0eba98ec1f285a076689bea01d3711363", "MacAddress": "02:42:0a:ff:01:4d", "IPv4Address": "10.255.1.77/16", "IPv6Address": "" }, "ingress-sbox": { "Name": "ingress-endpoint", "EndpointID": "7c1de93455c0a1d880c86d14349796ecfab91e6f9781b81ce37748eb676da477", "MacAddress": "02:42:0a:ff:00:02", "IPv4Address": "10.255.0.2/16", "IPv6Address": "" } }, "Options": { "com.docker.network.driver.overlay.vxlanid_list": "4096" }, "Labels": {}, "Peers": [ { "Name": "777cbef5a9c0", "IP": "192.168.99.100" } ] } ] ```
Whereas `dinghy-http-proxy` shows up in the `bridge` network:
```js [ { "Name": "bridge", "Id": "1d8ed08c89c17bcd3002f12cc2a814cb38fd3bcfe60e5563a0a5e07489d49540", "Created": "2018-11-27T06:34:54.732493365Z", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": null, "Config": [ { "Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": { "aabf8bce9c6329f5b9e953b88c53bb2d6f4c88855cc1c1c23d575981b893748e": { "Name": "dinghy_http_proxy", "EndpointID": "046f74d944de660f16dec72ec52d23a37dd2904e7f2b8e28febb916614fad890", "MacAddress": "02:42:ac:11:00:02", "IPv4Address": "172.17.0.2/16", "IPv6Address": "" } }, "Options": { "com.docker.network.bridge.default_bridge": "true", "com.docker.network.bridge.enable_icc": "true", "com.docker.network.bridge.enable_ip_masquerade": "true", "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0", "com.docker.network.bridge.name": "docker0", "com.docker.network.driver.mtu": "1500" }, "Labels": {} } ] ```
Trying all the IPs I find in these, I still can't get anything to connect 😤
Hello, I'm trying out dinghy because regular ol' docker for mac is slow as molasses with large bind mounts.
My stack consists of several stacks brought up with `docker stack deploy`, on a Docker swarm consisting of just the host machine as a single node. Dinghy comes up fine and all my containers seem much faster! 👍

Where I'm stuck is the DNS resolution. With vanilla d4m, Traefik was bound to localhost:80 so it could automatically proxy to my containers. That doesn't seem to work anymore, since I can't figure out what the DNS names are supposed to be.
I've tried `<service_name>`, `<service_name>.docker`, `<container_name>`, and `<container_name>.docker`, and I've tried manually connecting to the ip:port that `dinghy ip` reports. Still nothing connects. Does dinghy not support swarm mode?
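For completeness, here's how I'd query dinghy's resolver directly to see if it knows about any of these names (assuming its dnsmasq really is what's published on 19322/udp, and `arango.docker` is just a guess at a name):

```
# Ask the proxy's dnsmasq directly; any answer at all would at least confirm
# that .docker names are being generated for swarm services.
dig @"$(dinghy ip)" -p 19322 +short arango.docker
```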