savishy / docker-examples

A repository of Docker examples from simple to advanced.

docker-swarm-aws: "invalid host specified for playbook iteration" #1

Open savishy opened 8 years ago

savishy commented 8 years ago

In commit c710e97c1e28bab0ffd79b5862d79991ab79993b, the Vagrant + Ansible + Docker Swarm setup works most of the time, but provisioning sporadically fails with the following error:

```
fatal: [consul0]: FAILED! => {"changed": true, "cmd": ["docker", "run", "-d", "-p", "8500:8500", "--name=consul", "progrium/consul", "-server", "-bootstrap"], "delta": "0:00:06.046208", "end": "2016-09-01 03:51:05.527433", "failed": true, "invocation": {"module_args": {"_raw_params": "docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 125, "start": "2016-09-01 03:50:59.481225", "stderr": "Unable to find image 'progrium/consul:latest' locally\nlatest: Pulling from progrium/consul\nc862d82a67a2: Already exists\n0e7f3c08384e: Already exists\n0e221e32327a: Already exists\n09a952464e47: Already exists\n60a1b927414d: Already exists\n4c9f46b5ccce: Already exists\n417d86672aa4: Already exists\nb0d47ad24447: Pulling fs layer\nfd5300bd53f0: Pulling fs layer\na3ed95caeb02: Pulling fs layer\nd023b445076e: Pulling fs layer\nba8851f89e33: Pulling fs layer\n5d1cefca2a28: Pulling fs layer\nfd5300bd53f0: Download complete\na3ed95caeb02: Waiting\nd023b445076e: Waiting\nba8851f89e33: Waiting\n5d1cefca2a28: Waiting\nb0d47ad24447: Pull complete\nfd5300bd53f0: Pull complete\na3ed95caeb02: Verifying Checksum\na3ed95caeb02: Download complete\na3ed95caeb02: Pull complete\nd023b445076e: Verifying Checksum\nd023b445076e: Download complete\nd023b445076e: Pull complete\nba8851f89e33: Verifying Checksum\nba8851f89e33: Download complete\nba8851f89e33: Pull complete\n5d1cefca2a28: Verifying Checksum\n5d1cefca2a28: Download complete\n5d1cefca2a28: Pull complete\nDigest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274\nStatus: Image is up to date for progrium/consul:latest\ndocker: Error response from daemon: Conflict. The name \"/consul\" is already in use by container ca6b28e92181d6527f835a24d4ca47087b277f8647c95c276e1e8caeb73576f4. You have to remove (or rename) that container to be able to reuse that name..\nSee 'docker run --help'.", "stdout": "", "stdout_lines": [], "warnings": []}

NO MORE HOSTS LEFT *************************************************************
fatal: [consul0]: FAILED! => {"changed": true, "cmd": ["docker", "run", "-d", "-p", "8500:8500", "--name=consul", "progrium/consul", "-server", "-bootstrap"], "delta": "0:00:11.385375", "end": "2016-09-01 03:51:05.522137", "failed": true, "invocation": {"module_args": {"_raw_params": "docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 125, "start": "2016-09-01 03:50:54.136762", "stderr": "Unable to find image 'progrium/consul:latest' locally\nlatest: Pulling from progrium/consul\nc862d82a67a2: Pulling fs layer\n0e7f3c08384e: Pulling fs layer\n0e221e32327a: Pulling fs layer\n09a952464e47: Pulling fs layer\n60a1b927414d: Pulling fs layer\n4c9f46b5ccce: Pulling fs layer\n417d86672aa4: Pulling fs layer\nb0d47ad24447: Pulling fs layer\nfd5300bd53f0: Pulling fs layer\na3ed95caeb02: Pulling fs layer\nd023b445076e: Pulling fs layer\nba8851f89e33: Pulling fs layer\n5d1cefca2a28: Pulling fs layer\n09a952464e47: Waiting\n60a1b927414d: Waiting\n4c9f46b5ccce: Waiting\n417d86672aa4: Waiting\nb0d47ad24447: Waiting\nfd5300bd53f0: Waiting\na3ed95caeb02: Waiting\nd023b445076e: Waiting\nba8851f89e33: Waiting\n5d1cefca2a28: Waiting\n0e7f3c08384e: Verifying Checksum\n0e7f3c08384e: Download complete\n0e221e32327a: Verifying Checksum\n0e221e32327a: Download complete\nc862d82a67a2: Verifying Checksum\nc862d82a67a2: Download complete\nc862d82a67a2: Pull complete\n0e7f3c08384e: Pull complete\n0e221e32327a: Pull complete\n60a1b927414d: Verifying Checksum\n60a1b927414d: Download complete\n4c9f46b5ccce: Verifying Checksum\n4c9f46b5ccce: Download complete\n09a952464e47: Verifying Checksum\n09a952464e47: Download complete\n09a952464e47: Pull complete\n60a1b927414d: Pull complete\n4c9f46b5ccce: Pull complete\n417d86672aa4: Verifying Checksum\n417d86672aa4: Download complete\nfd5300bd53f0: Verifying Checksum\nfd5300bd53f0: Download complete\n417d86672aa4: Pull complete\nb0d47ad24447: Verifying Checksum\nb0d47ad24447: Download complete\nb0d47ad24447: Pull complete\nfd5300bd53f0: Pull complete\na3ed95caeb02: Verifying Checksum\na3ed95caeb02: Pull complete\nd023b445076e: Verifying Checksum\nd023b445076e: Pull complete\nba8851f89e33: Verifying Checksum\nba8851f89e33: Pull complete\n5d1cefca2a28: Verifying Checksum\n5d1cefca2a28: Pull complete\nDigest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274\nStatus: Image is up to date for progrium/consul:latest\ndocker: Error response from daemon: Conflict. The name \"/consul\" is already in use by container ca6b28e92181d6527f835a24d4ca47087b277f8647c95c276e1e8caeb73576f4. You have to remove (or rename) that container to be able to reuse that name..\nSee 'docker run --help'.", "stdout": "", "stdout_lines": [], "warnings": []}

NO MORE HOSTS LEFT *************************************************************

PLAY [setup swarm managers] ****************************************************

PLAY [setup swarm managers] ****************************************************
ERROR! invalid host (consul0) specified for playbook iteration
ERROR! invalid host (consul0) specified for playbook iteration
==> node0: An error occurred. The error will be shown after all tasks complete.
==> manager1: An error occurred. The error will be shown after all tasks complete.
```
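
The underlying `Conflict. The name "/consul" is already in use` error suggests the consul task is not idempotent: a plain `docker run --name=consul` fails whenever a previous provisioning run already created (or half-created) a container with that name. A possible fix, sketched here as a hypothetical replacement task rather than the playbook's actual contents, is to remove any stale container before starting a new one:

```yaml
# Hypothetical tasks; names and structure are illustrative, not the repo's actual playbook.
- name: remove stale consul container, if any, so reruns are idempotent
  command: docker rm -f consul
  ignore_errors: yes

- name: start consul in server bootstrap mode
  command: docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
```

Ansible's `docker_container` module (with `state: started` and a fixed `name`) would achieve the same idempotence more declaratively, if adding that dependency is acceptable.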
savishy commented 8 years ago

The `invalid host (consul0) specified for playbook iteration` error is likely due to some hosts being unreachable. The play recap shows:

```
consul0                    : ok=12   changed=3    unreachable=0    failed=0
manager0                   : ok=8    changed=0    unreachable=1    failed=0
manager1                   : ok=17   changed=5    unreachable=0    failed=0
node0                      : ok=2    changed=0    unreachable=1    failed=0
node1                      : ok=9    changed=1    unreachable=0    failed=0
```
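
Since `manager0` and `node0` show `unreachable=1`, a quick way to confirm the diagnosis before re-running the whole provision is to ping the guests from the Ansible side. The commands below are a hedged sketch; the inventory path and host names are assumptions based on the recap above:

```
# Check which guests Ansible can actually reach (inventory path is hypothetical):
ansible all -i inventory -m ping

# If a VM is down, restarting and re-provisioning just that guest often clears
# the "invalid host ... specified for playbook iteration" error:
vagrant reload manager0 --provision
vagrant reload node0 --provision
```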