stevvooe opened this issue 8 years ago
@antoineco An upstream patch was merged in https://github.com/moby/moby/pull/39204 on June 2 that will allow us to set the hostname and have it accessible from other containers on the network.
So now we're just waiting for that to make it into a Docker release, I guess?
That only makes the hostname resolvable, which is useful but doesn't help to generate predictable names for task replicas.
Task.Slot may help though. I hadn't noticed that parameter before, so it may be new (?)
Example: hostname: "myapp.{{.Task.Slot}}" should set hostnames like myapp.1, myapp.2, ...
edit: it works 👌
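For anyone testing this outside a stack file, the docker service CLI accepts the same templates; a minimal sketch (the myapp name and nginx:alpine image are placeholders, not from the original test):

$ docker service create --name myapp --replicas 3 --hostname 'myapp.{{.Task.Slot}}' nginx:alpine
$ # each task should then report its templated hostname (myapp.1, myapp.2, myapp.3):
$ docker ps -qf name=myapp | xargs -I{} docker exec {} hostname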
Can’t wait for this massive feature.
@antoineco what's the version of the build you have that works? One of the nightlies?
@deftdawg no, Docker 19.03. Anything recent enough should do; the feature seems older than I thought.
Oh, I thought you were saying resolution by hostname worked; rather, you're saying that setting the hostname now works...
I can now both set the hostname (i.e. myapp-{{.Task.Slot}}) and resolve by {{.Task.Name}}, neither of which worked on the 18.x I came from...
My goal is to get DB nodes to cluster with each other without requiring an external service like etcd or consul. To do that they need to resolve by something-{{.Task.Slot}}, because that is both predictable and stable (task slot 1 is respawned if it dies, whereas Task.Name will be something completely random)... this is so close to actually being usable...
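To sketch the idea (purely illustrative; the REPLICAS variable and the myapp- prefix are assumptions, nothing Swarm provides by itself), an entrypoint could derive the full peer list from the slot numbering alone:

$ cat peers.sh
#!/bin/sh
# Build a peer list from the predictable, stable slot-based hostnames.
# REPLICAS must be kept in sync with the service's deploy.replicas value.
PEERS=""
for i in $(seq 1 "${REPLICAS:-3}"); do
  PEERS="$PEERS myapp-$i"
done
echo "cluster peers:$PEERS"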
Here's what I did to test:
$ docker --version
Docker version 19.03.6, build 369ce74a3c
$ cat ssh-cluster.yaml
version: '3.7'
services:
  ssh:
    # use ssh containers because they are easy to shell into to poke around
    image: rastasheep/ubuntu-sshd:latest
    hostname: 'myapp-{{.Task.Slot}}'
    ports:
      - '2222:22'
    logging:
      driver: json-file
    deploy:
      replicas: 3
    # Debug stuff below, 'docker inspect <container> | grep X_' to see values
    environment:
      X_NODE_ID: '{{.Node.ID}}'
      X_NODE_HOSTNAME: '{{.Node.Hostname}}'
      X_NODE_PLATFROM: '{{.Node.Platform}}'
      X_NODE_PLATFROM_ARCHITECTURE: '{{.Node.Platform.Architecture}}'
      X_NODE_PLATFROM_OS: '{{.Node.Platform.OS}}'
      X_SERVICE_ID: '{{.Service.ID}}'
      X_SERVICE_NAMES: '{{.Service.Name}}'
      X_SERVICE_LABELS: '{{.Service.Labels}}'
      X_SERVICE_LABEL_STACK_NAMESPACE: '{{index .Service.Labels "com.docker.stack.namespace"}}'
      X_SERVICE_LABEL_STACK_IMAGE: '{{index .Service.Labels "com.docker.stack.image"}}'
      X_SERVICE_LABEL_CUSTOM: '{{index .Service.Labels "service.label"}}'
      X_TASK_ID: '{{.Task.ID}}'
      X_TASK_NAME: '{{.Task.Name}}'
      X_TASK_SLOT: '{{.Task.Slot}}'
$ docker stack deploy ssh --compose-file ssh-cluster.yaml
Updating service ssh_ssh (id: 7iim2c7je2wati3bwo4j6odcs)
$ docker stack ps -f "desired-state=running" ssh
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
vishpj0pwrnk ssh_ssh.1 rastasheep/ubuntu-sshd:latest Chromebox Running Running 50 seconds ago
f7d3cxvlar1r ssh_ssh.2 rastasheep/ubuntu-sshd:latest Chromebox Running Running 32 seconds ago
m7v3mmouhplb ssh_ssh.3 rastasheep/ubuntu-sshd:latest Chromebox Running Running 41 seconds ago
$ docker ps -f name=ssh
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2b869dab3a94 rastasheep/ubuntu-sshd:latest "/usr/sbin/sshd -D" About a minute ago Up About a minute 22/tcp ssh_ssh.2.f7d3cxvlar1rob0m6dg74ek7j
f6ba9f573b38 rastasheep/ubuntu-sshd:latest "/usr/sbin/sshd -D" About a minute ago Up About a minute 22/tcp ssh_ssh.3.m7v3mmouhplbhj4imbts7alki
335f9fed407f rastasheep/ubuntu-sshd:latest "/usr/sbin/sshd -D" About a minute ago Up About a minute 22/tcp ssh_ssh.1.vishpj0pwrnko5xr47kgxqv0r
$ docker inspect $(docker ps -f name=ssh | grep -v IMAGE | head -1 | cut -d" " -f1) | grep X_
"X_NODE_HOSTNAME=Chromebox",
"X_NODE_ID=j92dthzieshbvx2exi30hq2dy",
"X_NODE_PLATFROM={x86_64 linux}",
"X_NODE_PLATFROM_ARCHITECTURE=x86_64",
"X_NODE_PLATFROM_OS=linux",
"X_SERVICE_ID=7iim2c7je2wati3bwo4j6odcs",
"X_SERVICE_LABELS=map[com.docker.stack.image:rastasheep/ubuntu-sshd:latest com.docker.stack.namespace:ssh]",
"X_SERVICE_LABEL_CUSTOM=",
"X_SERVICE_LABEL_STACK_IMAGE=rastasheep/ubuntu-sshd:latest",
"X_SERVICE_LABEL_STACK_NAMESPACE=ssh",
"X_SERVICE_NAMES=ssh_ssh",
"X_TASK_ID=f7d3cxvlar1rob0m6dg74ek7j",
"X_TASK_NAME=ssh_ssh.2.f7d3cxvlar1rob0m6dg74ek7j",
"X_TASK_SLOT=2",
$ ssh root@localhost -p 2222 # password 'root'
root@localhost's password:
root@myapp-2:~# ssh myapp-1
ssh: Could not resolve hostname myapp-1: No address associated with hostname
root@myapp-2:~# ssh ssh_ssh.1
ssh: Could not resolve hostname ssh_ssh.1: Name or service not known
root@myapp-2:~# ssh ssh_ssh.1.vishpj0pwrnko5xr47kgxqv0r
The authenticity of host 'ssh_ssh.1.vishpj0pwrnko5xr47kgxqv0r (10.0.2.29)' can't be established.
ECDSA key fingerprint is SHA256:YtTfuoRRR5qStSVA5UuznGamA/dvf+djbIT6Y48IYD0.
Are you sure you want to continue connecting (yes/no)? ^C
root@myapp-2:~# exit
logout
Connection to localhost closed.
@deftdawg No, I meant setting the hostname works. Here is a usage example: https://github.com/deviantony/docker-elk/wiki/Elasticsearch-cluster#swarm-mode
I'm not sure if it works for everything, but it works with env vars.
So this works in Docker built from HEAD. The patch that @Penagwin referred to was merged into Docker with commit 849af5e343b5a2ca691758a2b8518243968b3a00 on June 2nd, 2019, so the next major release (20.x?) should work out of the box... I wonder how much longer we'll have to wait.
Here's the retest against Docker HEAD:
root@localhost's password:
Last login: Thu Mar 5 16:30:18 2020 from 10.0.1.5
root@myapp-1:~# ssh myapp-2
The authenticity of host 'myapp-2 (10.0.1.4)' can't be established.
ECDSA key fingerprint is SHA256:YtTfuoRRR5qStSVA5UuznGamA/dvf+djbIT6Y48IYD0.
Are you sure you want to continue connecting (yes/no)? ^C
root@myapp-1:~# ssh myapp-3
The authenticity of host 'myapp-3 (10.0.1.5)' can't be established.
ECDSA key fingerprint is SHA256:YtTfuoRRR5qStSVA5UuznGamA/dvf+djbIT6Y48IYD0.
Are you sure you want to continue connecting (yes/no)? ^C
root@myapp-1:~#
So is there any workaround to achieve the same hostname resolution before the new release lands?
I'm struggling with this. Did you find any workaround?
Hello! Did you find a solution? A feature to predefine container names in a cluster would be very useful.
I would like this for the network environment of a csync2 service stack. For example: the csync2 stack consists of one declared service (storage-replicator) with 5 replicas, and I would like to be able to reach neighboring replicas inside the service at a convenient address {{service_name}}.{{replica_id}} (storage_replicator.1, storage_replicator.2, storage_replicator.3, etc.).
This feature in swarm mode would significantly reduce the size of yml files and make them more uniform and simpler.
It has worked for me on Debian with the official Docker packages for quite some time now.
@JKJameson how does it work? I can't find this in the documentation...
@TuzelKO In a swarm, with this line in the docker-compose.yml for each service:
hostname: "{{.Service.Name}}.{{.Task.Slot}}"
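In context it looks something like the sketch below (using the storage-replicator service mentioned above; the image name is a placeholder):

version: '3.7'
services:
  storage-replicator:
    image: example/csync2:latest  # placeholder image
    hostname: '{{.Service.Name}}.{{.Task.Slot}}'
    deploy:
      replicas: 5

Note that .Service.Name includes the stack namespace, so with a stack named csync2 the replicas end up as csync2_storage-replicator.1 through csync2_storage-replicator.5, and per the tests earlier in this thread those hostnames are resolvable from the other replicas on a recent enough engine.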
@JKJameson wow... Thanks!
@JKJameson Please, is there any further documentation on this, and also on the tasks.<service> DNS entries?
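For reference, both kinds of names are easy to probe from inside any container attached to the overlay network (using the ssh_ssh service from the test earlier in this thread):

$ getent hosts ssh_ssh        # the service's virtual IP (default 'vip' endpoint mode)
$ getent hosts tasks.ssh_ssh  # one address per running task, bypassing the VIP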
While there has been discussion in https://github.com/docker/docker/pull/24973 and https://github.com/docker/swarmkit/issues/192, no clear schema for mapping service resources into the DNS space has been adopted.
The following presents a schema for mapping cluster-level FQDNs from various components:
Component     FQDN                                              Example(s)
<cluster>     <cluster>                                         local, cluster0
<namespace>   <namespace>.<cluster>                             production.cluster0, development.local, system
<node>        <node>.<cluster>                                  node0.local
<job>         <job>.<namespace>.<cluster>                       job0.production.cluster0
<slot>        <slot id>.<job>.<namespace>.<cluster>             1.job0.production.cluster0
<task>        <task id>.<slot id>.<job>.<namespace>.<cluster>   abcdef.1.job0.production.cluster0
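To make the proposal concrete, lookups under this schema would behave something like the following (entirely hypothetical; no such schema has been adopted):

$ nslookup job0.production.cluster0           # the job as a whole (e.g. a VIP or round-robin record set)
$ nslookup 1.job0.production.cluster0         # whichever task currently occupies slot 1
$ nslookup abcdef.1.job0.production.cluster0  # one specific task instance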
@mavenugo @mrjana @aluzzardi