atosatto / ansible-dockerswarm

Docker Engine clustering using "Swarm Mode" and Ansible
https://galaxy.ansible.com/atosatto/docker-swarm/
MIT License

Getting fatal error on `Get list of labels.` task #78

Open · cpxPratik opened this issue 4 years ago

cpxPratik commented 4 years ago

The Get list of labels. task started failing after it was updated to use ansible_fqdn in https://github.com/atosatto/ansible-dockerswarm/commit/3bb8a49297448325b8feaa9b7a899c78b2fab97e

The node hostname (staging-manager-03) shown by docker node ls is different from the FQDN string passed in the following error:

TASK [atosatto.docker-swarm : Get list of labels.] ********************************************************************************************************************************************
fatal: [165.22.48.107 -> 165.22.48.105]: FAILED! => {"changed": false, "cmd": ["docker", "inspect", "--format", "{{ range $key, $value := .Spec.Labels }}{{ printf \"%s\\n\" $key }}{{ end }}", "staging-manager-03.sgp1"], "delta": "0:00:00.412684", "end": "2020-05-14 13:10:42.573599", "msg": "non-zero return code", "rc": 1, "start": "2020-05-14 13:10:42.160915", "stderr": "Error: No such object: staging-manager-03.sgp1", "stderr_lines": ["Error: No such object: staging-manager-03.sgp1"], "stdout": "", "stdout_lines": []}

For now I am using v2.2.0 which gives no error.
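
For reference, one way to see the mismatch on an affected manager node (a sketch, assuming shell access and that the node is a manager, since docker node commands only work there):

    # name the swarm registered for this node
    docker node inspect self --format '{{ .Description.Hostname }}'

    # FQDN that versions after commit 3bb8a49 pass to `docker inspect`
    hostname --fqdn

If the two outputs differ, the "No such object" error above is expected.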

wombathuffer commented 4 years ago

I have the same issue, except the error is 'ambiguous' instead of 'no such object':

TASK [atosatto.docker-swarm : Get list of labels.] *******************************************************************************************************************************************
fatal: [asus.yi -> None]: FAILED! => {"changed": false, "cmd": ["docker", "inspect", "--format", "{{ range $key, $value := .Spec.Labels }}{{ printf \"%s\\n\" $key }}{{ end }}", "host.domain"], "delta": "0:00:00.335281", "end": "2020-05-15 22:58:12.700418", "msg": "non-zero return code", "rc": 1, "start": "2020-05-15 22:58:12.365137", "stderr": "Error response from daemon: node host.domain is ambiguous (2 matches found)", "stderr_lines": ["Error response from daemon: node host.domain is ambiguous (2 matches found)"], "stdout": "", "stdout_lines": []}

Edit: The workaround for me was simply making the node leave: docker swarm leave --force.
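
Another cleanup for the 'ambiguous (2 matches found)' case, sketched below with a placeholder node ID, is to remove the stale duplicate entry from a manager before re-running the role:

    # on a manager: list nodes; the stale duplicate usually shows STATUS "Down"
    docker node ls

    # remove the stale entry by its ID (placeholder); add --force if it was a manager
    docker node rm <stale-node-id>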

atosatto commented 4 years ago

Thanks @cpxPratik for reporting this issue. I'll try to reproduce it in a test cluster and figure out a better way of managing nodes.

Can you please confirm which Docker version you are using?

cpxPratik commented 4 years ago

@atosatto The Docker version is 19.03.8, build afacb8b7f0.

yukiisbored commented 4 years ago

Hello, I'm having the same issue on a cluster. It seems the node object is using the short hostname instead of the full FQDN.

It seems this is the root cause: https://github.com/atosatto/ansible-dockerswarm/commit/3bb8a49297448325b8feaa9b7a899c78b2fab97e

Though I don't see any reference in the playbook to nodes joining by FQDN. Is this a new change in upstream Docker?

yukiisbored commented 4 years ago

btw, I'm currently running version 19.03.8

FleischKarussel commented 4 years ago

Same issue here, using 19.03.6 (the latest docker.io package provided by Ubuntu 18.04).

Bogdan1001 commented 4 years ago

I have the same issue too, on Ubuntu 18.04.

till commented 4 years ago

@atosatto We fixed this a while back but it was reverted or we mixed it up. It's inventory_hostname vs fqdn.

Bogdan1001 commented 4 years ago

The workaround for me was to replace {{ ansible_fqdn|lower }} with {{ ansible_hostname }} and to remove all dots from the hostname, so node1.connect became node1connect.
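
For illustration, the variable swap described above would make the label-listing task target the short hostname instead of the FQDN. This is only a minimal sketch of such a task, not the role's actual file, and the register name is hypothetical:

    # Illustrative sketch only -- not the role's actual task file.
    - name: Get list of labels.
      command: >-
        docker inspect --format
        '{% raw %}{{ range $key, $value := .Spec.Labels }}{{ printf "%s\n" $key }}{{ end }}{% endraw %}'
        {{ ansible_hostname }}
      register: docker_swarm_labels    # hypothetical register name
      changed_when: false

The dot removal in node1.connect was done on the machine's hostname itself, not via a filter in the task.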

gumbo2k commented 4 years ago

@atosatto We fixed this a while back but it was reverted or we mixed it up. It's inventory_hostname vs fqdn.

@till I thought the same and tried to work around it by listing the hosts as FQDNs in my inventory. No luck.

nununo commented 4 years ago

Hello. I'm also having this issue. Any plans to reapply the fix? Thanks!

juanluisbaptiste commented 4 years ago

I can confirm that commit 3bb8a49, mentioned in issue #82, is the one that breaks the label setup; if it is reverted, the playbook finishes without issues.

joshes commented 4 years ago

Seeing the same behaviour on v2.3.0; rolling back to v2.2.0 resolves it.
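
For anyone doing the same rollback, a requirements.yml pin along these lines (version string as quoted in this thread) is one way to do it:

    # requirements.yml -- pin the role to the last release without the ansible_fqdn change
    - src: atosatto.docker-swarm
      version: "v2.2.0"

Then install it with ansible-galaxy install -r requirements.yml --force.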

till commented 4 years ago

Another case where this happens is the following:

I had botched my swarm setup, so it was not about node names (e.g. inventory name vs. fully qualified domain name); instead, the nodes were no longer seen by the manager.

The role doesn't currently handle this (no judgement meant). I think it's a split-brain/no-brain kind of thing: I had restarted my manager (and I run only one), and then this happened.

The fix was the following:

  1. get the join-token myself
  2. then force leave the workers
  3. (re-)join the manager/cluster
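
Roughly, those steps map to the following commands (a sketch; the join token and manager address are placeholders):

    # 1. on the surviving manager: print the current worker join token
    docker swarm join-token -q worker

    # 2. on each stuck worker: drop the stale swarm state
    docker swarm leave --force

    # 3. back on the worker: re-join the cluster using the token from step 1
    docker swarm join --token <worker-token> <manager-ip>:2377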

And then the role completes.

The other fix is to run two managers. ;-)

I am not entirely sure how this could be added to the role, since the manager doesn't see the workers anymore but the workers think they are still connected. If you can afford it, trash the nodes and set them up again. Maybe it's a documentation thing after all?

quadeare commented 4 years ago

Same issue on CentOS 7.

For now I am using v2.2.0, which works like a charm!

juanluisbaptiste commented 4 years ago

I can confirm that commit 3bb8a49, mentioned in issue #82, is the one that breaks the label setup; if it is reverted, the playbook finishes without issues.

Now I'm not sure whether this is related at all, as I have been getting this error several times with that commit reverted as well. It always happens when I add a new instance to the cluster: the first run of the role is fine, but after I create a new AWS instance and run the role again to add it to the cluster, the role fails with this error. This is the error Ansible throws on nodes that are already part of the cluster:

<10.0.10.36> (0, b'', b'')
fatal: [10.0.10.36 -> 10.0.10.36]: FAILED! => {
    "changed": false,
    "cmd": [
        "docker",
        "inspect",
        "--format",
        "{{ range $key, $value := .Spec.Labels }}{{ printf \"%s\\n\" $key }}{{ end }}",
        "10"
    ],
    "delta": "0:00:00.081487",
    "end": "2020-10-15 23:41:24.604902",
    "invocation": {
        "module_args": {
            "_raw_params": "docker inspect --format '{{ range $key, $value := .Spec.Labels }}{{ printf \"%s\\n\" $key }}{{ end }}' 10",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2020-10-15 23:41:24.523415",
    "stderr": "Error: No such object: 10",
    "stderr_lines": [
        "Error: No such object: 10"
    ],
    "stdout": "",
    "stdout_lines": []
}

That is the error for the manager, but the workers throw it too.
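
When the inspect target comes out mangled like that ("10" here), it can help to compare the names the swarm actually knows with what Ansible's facts resolve for each host (the inventory path below is a placeholder):

    # on a manager: hostnames as registered in the swarm, with their status
    docker node ls --format '{{ .Hostname }}  {{ .Status }}'

    # what Ansible resolves for each host
    ansible -i <inventory> all -m setup -a 'filter=ansible_fqdn'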

juanluisbaptiste commented 4 years ago

Same issue on CentOS 7.

For now I am using v2.2.0, which works like a charm!

For me it also happens with v2.2.0, as described in my previous comment.

juanluisbaptiste commented 3 years ago

I had to use this role again and got an error when running it a second time. This time I noticed that the error is different from the one in this issue (and the error reported in my previous comment was probably about this new issue, not related to this one). This time the error is in the "Remove labels from swarm node" task, and it occurs when labels are configured outside this role (i.e. by manually adding a label to a node). I will create a separate issue for that with an accompanying PR fixing it.

juanluisbaptiste commented 3 years ago

I had to use this role again and got an error when running it a second time. This time I noticed that the error is different from the one in this issue (and the error reported in my previous comment was probably about this new issue, not related to this one). This time the error is in the "Remove labels from swarm node" task, and it occurs when labels are configured outside this role (i.e. by manually adding a label to a node). I will create a separate issue for that with an accompanying PR fixing it.

I've added issue #96 for this and fixed it in PR #97. I hope it gets merged (although I don't have my hopes up that it will happen, heh).