kiranneo closed this issue 8 years ago
Could you please give more information:
1. Are you trying this on Vagrant boxes?
2. Please paste the output of the net_demo_installer script execution.
@kiranneo: Gentle reminder!
You can use 172.23.145.134:1 (nbv123) to debug further. Here is the output of `net_demo_installer -r`:
Parsing config file...
==== Contiv Netplugin Demo Installer ====
Netplugin Cluster will be set up on the following servers in Standalone mode:
172.23.145.136
172.23.145.135
172.23.145.134
Ready to proceed(y/n)? y
[netplugin-node]
node1 ansible_ssh_host=172.23.145.136 contiv_network_mode=standalone control_interface=ens32 netplugin_if=ens34 fwd_mode=bridge
node2 ansible_ssh_host=172.23.145.135 contiv_network_mode=standalone control_interface=ens32 netplugin_if=ens34 fwd_mode=bridge
In restart mode
Removing containers
Stopping all services
PLAY [all] *****
TASK [setup] *** ok: [node2] ok: [node1] ok: [node3]
TASK [include_vars] **** ok: [node1] => (item=contiv_network) ok: [node2] => (item=contiv_network) ok: [node3] => (item=contiv_network) ok: [node3] => (item=contiv_storage) ok: [node1] => (item=contiv_storage) ok: [node2] => (item=contiv_storage) ok: [node3] => (item=swarm) ok: [node2] => (item=swarm) ok: [node1] => (item=swarm) ok: [node3] => (item=ucp) ok: [node2] => (item=ucp) ok: [node3] => (item=docker) ok: [node3] => (item=etcd) ok: [node1] => (item=ucp) ok: [node2] => (item=docker) ok: [node1] => (item=docker) ok: [node1] => (item=etcd) ok: [node2] => (item=etcd)
TASK [include] ***** included: /home/admin05/contiv_demo/ansible/roles/ucarp/tasks/cleanup.yml for node1, node3, node2 included: /home/admin05/contiv_demo/ansible/roles/contiv_network/tasks/cleanup.yml for node1, node3, node2 included: /home/admin05/contiv_demo/ansible/roles/contiv_storage/tasks/cleanup.yml for node1, node3, node2 included: /home/admin05/contiv_demo/ansible/roles/swarm/tasks/cleanup.yml for node1, node3, node2 included: /home/admin05/contiv_demo/ansible/roles/ucp/tasks/cleanup.yml for node1, node3, node2 included: /home/admin05/contiv_demo/ansible/roles/etcd/tasks/cleanup.yml for node1, node3, node2 included: /home/admin05/contiv_demo/ansible/roles/nfs/tasks/cleanup.yml for node1, node3, node2 included: /home/admin05/contiv_demo/ansible/roles/docker/tasks/cleanup.yml for node1, node3, node2
TASK [stop ucarp] ** fatal: [node1]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'ucarp'\": "} ...ignoring fatal: [node2]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'ucarp'\": "} ...ignoring fatal: [node3]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'ucarp'\": "} ...ignoring
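The `stop ucarp` failure (like the later `aci-gw`, `volmaster`, `volsupervisor`, `volplugin`, `ucp` and NFS ones) follows one pattern: the systemd unit simply is not installed on these hosts, and the play ignores the error. The quotes around `'ucarp'` in the message come from Ansible's own error formatting, not from the unit name. A cleanup task that tolerates a missing unit looks roughly like this (a minimal sketch, not the actual Contiv role):

```yaml
# Hypothetical sketch of a tolerant cleanup task: stop the unit if it
# exists, and carry on if systemd has never heard of it.
- name: stop ucarp
  service:
    name: ucarp
    state: stopped
  ignore_errors: true   # matches the "...ignoring" entries in the log
```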
TASK [stop netmaster] ** changed: [node1] changed: [node3] changed: [node2]
TASK [stop aci-gw container] *** fatal: [node1]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'aci-gw'\": "} ...ignoring fatal: [node2]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'aci-gw'\": "} ...ignoring fatal: [node3]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'aci-gw'\": "} ...ignoring
TASK [stop netplugin] ** changed: [node1] ok: [node2] ok: [node3]
TASK [cleanup netmaster host alias] **** changed: [node1] changed: [node3] changed: [node2]
TASK [cleanup iptables for contiv network control plane] ***
[WARNING]: The loop variable 'item' is already in use. You should set the loop_var value in the loop_control option for the task to something else to avoid variable collisions and unexpected behavior. (warning emitted once per node)
changed: [node2] => (item=9001) changed: [node3] => (item=9001) changed: [node3] => (item=9002) changed: [node2] => (item=9002) changed: [node1] => (item=9001) changed: [node2] => (item=9003) changed: [node3] => (item=9003) changed: [node1] => (item=9002) changed: [node2] => (item=9999) changed: [node3] => (item=9999) changed: [node1] => (item=9003) changed: [node3] => (item=8080) changed: [node2] => (item=8080) changed: [node1] => (item=9999) changed: [node3] => (item=179) changed: [node2] => (item=179) changed: [node1] => (item=8080) changed: [node1] => (item=179)
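The repeated `[WARNING]` above means this task loops with the default `item` variable while an enclosing include already uses `item`; nothing breaks here, but a nested loop could silently read the wrong value. The fix Ansible suggests is `loop_control.loop_var` (the port list mirrors the log, but the YAML itself is an illustrative sketch, not the actual role):

```yaml
# Illustrative sketch: rename the loop variable so it cannot collide
# with the outer include's 'item'.
- name: cleanup iptables for contiv network control plane
  command: "iptables -D INPUT -p tcp --dport {{ port }} -j ACCEPT"
  with_items: [9001, 9002, 9003, 9999, 8080, 179]
  loop_control:
    loop_var: port    # the task now references {{ port }} instead of {{ item }}
  ignore_errors: true
```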
TASK [include] ***** included: /home/admin05/contiv_demo/ansible/roles/contiv_network/tasks/ovs_cleanup.yml for node1, node3, node2
TASK [cleanup ovs vlan state] ** changed: [node2] changed: [node1] changed: [node3]
TASK [cleanup ovs vxlan state] ***** changed: [node1] changed: [node2] changed: [node3]
TASK [cleanup ports] *** changed: [node3] changed: [node1] changed: [node2]
TASK [debug] *** ok: [node1] => { "ports": { "changed": true, "cmd": "set -x; for p in $(ifconfig | grep vport | awk '{print $1}'); do\n ip link delete $p type veth;\n done", "delta": "0:00:00.033189", "end": "2016-07-24 00:00:02.307015", "rc": 0, "start": "2016-07-24 00:00:02.273826", "stderr": "++ ifconfig\n++ grep vport\n++ awk '{print $1}'\n+ for p in '$(ifconfig | grep vport | awk '\''{print $1}'\'')'\n+ ip link delete vvport2 type veth", "stdout": "", "stdout_lines": [], "warnings": [] } } ok: [node3] => { "ports": { "changed": true, "cmd": "set -x; for p in $(ifconfig | grep vport | awk '{print $1}'); do\n ip link delete $p type veth;\n done", "delta": "0:00:00.005102", "end": "2016-07-24 00:00:02.663980", "rc": 0, "start": "2016-07-24 00:00:02.658878", "stderr": "++ ifconfig\n++ awk '{print $1}'\n++ grep vport", "stdout": "", "stdout_lines": [], "warnings": [] } } ok: [node2] => { "ports": { "changed": true, "cmd": "set -x; for p in $(ifconfig | grep vport | awk '{print $1}'); do\n ip link delete $p type veth;\n done", "delta": "0:00:00.021284", "end": "2016-07-24 00:00:03.037224", "rc": 0, "start": "2016-07-24 00:00:03.015940", "stderr": "++ ifconfig\n++ grep vport\n++ awk '{print $1}'\n+ for p in '$(ifconfig | grep vport | awk '\''{print $1}'\'')'\n+ ip link delete vvport2 type veth", "stdout": "", "stdout_lines": [], "warnings": [] } }
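The port-cleanup loop in the debug output shells out to `ifconfig | grep vport`, which depends on the deprecated `ifconfig` and can match `vport` anywhere on a line, not just in the interface-name column. An illustrative alternative using iproute2 (an assumption about how one might rewrite the task, not the actual role):

```yaml
# Sketch: enumerate interfaces with iproute2 instead of ifconfig.
# 'ip -o link' prints "N: name[@peer]: <flags> ...", so awk takes field 2
# and ${p%%@*} strips the veth peer suffix before deleting.
- name: cleanup ports
  shell: |
    for p in $(ip -o link show | awk -F': ' '/vport/ {print $2}'); do
      ip link delete "${p%%@*}" type veth
    done
```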
TASK [deny openvswitch_t type in selinux] ** fatal: [node1]: FAILED! => {"changed": true, "cmd": "semanage permissive -d openvswitch_t", "delta": "0:00:00.001713", "end": "2016-07-24 00:00:02.598629", "failed": true, "rc": 127, "start": "2016-07-24 00:00:02.596916", "stderr": "/bin/sh: 1: semanage: not found", "stdout": "", "stdout_lines": [], "warnings": []} ...ignoring fatal: [node2]: FAILED! => {"changed": true, "cmd": "semanage permissive -d openvswitch_t", "delta": "0:00:00.001475", "end": "2016-07-24 00:00:03.337401", "failed": true, "rc": 127, "start": "2016-07-24 00:00:03.335926", "stderr": "/bin/sh: 1: semanage: not found", "stdout": "", "stdout_lines": [], "warnings": []} ...ignoring fatal: [node3]: FAILED! => {"changed": true, "cmd": "semanage permissive -d openvswitch_t", "delta": "0:00:00.001585", "end": "2016-07-24 00:00:03.020327", "failed": true, "rc": 127, "start": "2016-07-24 00:00:03.018742", "stderr": "/bin/sh: 1: semanage: not found", "stdout": "", "stdout_lines": [], "warnings": []} ...ignoring
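`semanage: not found` with `rc: 127` just means the SELinux management tools are not installed; on these Ubuntu hosts SELinux is not enforcing, so the task is safely ignored. One way to make that explicit is to guard the task by OS family (the condition is an assumption for illustration, not the actual role logic):

```yaml
# Hypothetical guard: only attempt the SELinux change where the
# policycoreutils tooling (semanage) is normally present.
- name: deny openvswitch_t type in selinux
  command: semanage permissive -d openvswitch_t
  when: ansible_os_family == "RedHat"
  ignore_errors: true
```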
TASK [cleanup iptables for vxlan vtep port] ****
changed: [node3] => (item=4789) changed: [node1] => (item=4789) changed: [node2] => (item=4789)
TASK [stop volmaster] ** fatal: [node1]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'volmaster'\": "} ...ignoring fatal: [node2]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'volmaster'\": "} ...ignoring fatal: [node3]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'volmaster'\": "} ...ignoring
TASK [stop volsupervisor] ** fatal: [node1]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'volsupervisor'\": "} ...ignoring fatal: [node2]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'volsupervisor'\": "} ...ignoring fatal: [node3]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'volsupervisor'\": "} ...ignoring
TASK [stop volplugin] ** fatal: [node1]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'volplugin'\": "} ...ignoring fatal: [node2]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'volplugin'\": "} ...ignoring fatal: [node3]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'volplugin'\": "} ...ignoring
TASK [stop swarm] ** changed: [node1] changed: [node2] changed: [node3]
TASK [cleanup iptables for swarm] **
changed: [node1] => (item=2375) changed: [node3] => (item=2375) changed: [node2] => (item=2375)
TASK [stop ucp] **** fatal: [node2]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'ucp'\": "} ...ignoring fatal: [node1]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'ucp'\": "} ...ignoring fatal: [node3]: FAILED! => {"changed": false, "failed": true, "msg": "systemd could not find the requested service \"'ucp'\": "} ...ignoring
TASK [cleanup ucp files from remote] ***
ok: [node1] => (item=ucp-fingerprint) ok: [node3] => (item=ucp-fingerprint) ok: [node2] => (item=ucp-fingerprint) ok: [node1] => (item=ucp-instance-id) ok: [node3] => (item=ucp-instance-id) ok: [node2] => (item=ucp-instance-id) ok: [node1] => (item=ucp-certificate-backup.tar) ok: [node3] => (item=ucp-certificate-backup.tar) ok: [node2] => (item=ucp-certificate-backup.tar)
TASK [cleanup ucp generated docker config file] **** ok: [node1] ok: [node2] ok: [node3]
TASK [cleanup iptables for ucp] ****
failed: node1 => {"changed": true, "cmd": "iptables -D INPUT -p tcp --dport 12376 -j ACCEPT -m comment --comment \"ucp traffic (12376)\"", "rc": 1, "stderr": "iptables: Bad rule (does a matching rule exist in that chain?).", "stdout": "", "stdout_lines": [], "warnings": []}
(the same "iptables: Bad rule (does a matching rule exist in that chain?)." failure repeats on node1, node2 and node3 for ports 12376, 12379, 12380, 12381, 12382, 12383, 12384, 12385, 12386, 2375, 2376 and 443) ...ignoring
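`iptables: Bad rule (does a matching rule exist in that chain?)` with `rc: 1` means `-D` was asked to delete a rule that is not in the chain (the UCP rules were never added on this run). Deletion can be made idempotent by probing with `-C` first; a hypothetical rewrite of one such cleanup task:

```yaml
# Hypothetical idempotent delete: iptables -C checks whether the rule
# exists, so -D only runs when there is something to remove.
- name: cleanup iptables for ucp
  shell: >
    iptables -C INPUT -p tcp --dport {{ port }} -j ACCEPT
    -m comment --comment "ucp traffic ({{ port }})" &&
    iptables -D INPUT -p tcp --dport {{ port }} -j ACCEPT
    -m comment --comment "ucp traffic ({{ port }})" || true
  with_items: [12376, 12379, 12380, 443]
  loop_control:
    loop_var: port
```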
TASK [stop etcd] *** changed: [node2] changed: [node3] changed: [node1]
TASK [cleanup iptables for etcd] ***
changed: [node1] => (item=2379) changed: [node3] => (item=2379) changed: [node2] => (item=2379) changed: [node1] => (item=4001) changed: [node3] => (item=4001) changed: [node2] => (item=4001) changed: [node1] => (item=2380) changed: [node3] => (item=2380) changed: [node2] => (item=2380) changed: [node1] => (item=7001) changed: [node3] => (item=7001) changed: [node2] => (item=7001)
TASK [stop nfs services (redhat)] **
failed: node1 => {"failed": true, "item": "rpcbind", "msg": "systemd could not find the requested service \"'rpcbind'\": "}
(the same "systemd could not find the requested service" failure repeats on node1, node2 and node3 for rpcbind, nfs-server, rpc-statd and nfs-idmapd) ...ignoring
TASK [stop nfs services (debian)] **
failed: node1 => {"failed": true, "item": "nfs-kernel-server", "msg": "systemd could not find the requested service \"'nfs-kernel-server'\": "}
(the same failure repeats on node1, node2 and node3 for nfs-kernel-server and nfs-common) ...ignoring
TASK [stop docker] ***** changed: [node2] changed: [node1] changed: [node3]
TASK [stop docker tcp socket] ** changed: [node1] changed: [node3] changed: [node2]
TASK [cleanup iptables for docker] *****
[WARNING]: The loop variable 'item' is already in use. You should set the loop_var
value in the loop_control
option for the task to something else to avoid variable collisions and unexpected
behavior.
changed: [node1] => (item=2385) changed: [node3] => (item=2385) changed: [node2] => (item=2385)
PLAY RECAP *****
node1 : ok=39 changed=15 unreachable=0 failed=0
node2 : ok=39 changed=14 unreachable=0 failed=0
node3 : ok=39 changed=14 unreachable=0 failed=0
Setting up services on nodes [DEPRECATION WARNING]: Instead of sudo/sudo_user, use become/become_user and make sure become_method is 'sudo' (default). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [devtest] ***** skipping: no hosts matched
PLAY [volplugin-test] ** skipping: no hosts matched
PLAY [cluster-node] **** skipping: no hosts matched
PLAY [cluster-control] ***** skipping: no hosts matched
PLAY [service-master] ** skipping: no hosts matched
PLAY [service-worker] ** skipping: no hosts matched
PLAY [netplugin-node] **
TASK [setup] *** ok: [node2] ok: [node1] ok: [node3]
TASK [base : include] ** included: /home/admin05/contiv_demo/ansible/roles/base/tasks/ubuntu_tasks.yml for node3, node1, node2
TASK [base : upgrade system (debian)] ** ok: [node1] ok: [node2] ok: [node3]
TASK [base : install base packages (debian)] *** ok: [node3] => (item=[u'ntp', u'unzip', u'bzip2', u'curl', u'python-software-properties', u'bash-completion', u'python-selinux', u'e2fsprogs', u'openssh-server']) ok: [node2] => (item=[u'ntp', u'unzip', u'bzip2', u'curl', u'python-software-properties', u'bash-completion', u'python-selinux', u'e2fsprogs', u'openssh-server']) ok: [node1] => (item=[u'ntp', u'unzip', u'bzip2', u'curl', u'python-software-properties', u'bash-completion', u'python-selinux', u'e2fsprogs', u'openssh-server'])
TASK [base : include] ** skipping: [node1] skipping: [node3] skipping: [node2]
TASK [base : include] ** included: /home/admin05/contiv_demo/ansible/roles/base/tasks/os_agnostic_tasks.yml for node1, node2, node3
TASK [base : download consul binary] *** ok: [node2] ok: [node1] ok: [node3]
TASK [base : install consul] *** changed: [node2] changed: [node1] changed: [node3]
TASK [docker : check docker version] *** changed: [node1] changed: [node3] changed: [node2]
TASK [docker : create docker daemon's config directory] **** ok: [node3] ok: [node1] ok: [node2]
TASK [docker : setup docker daemon's environment] ** ok: [node1] ok: [node3] ok: [node2]
TASK [docker : include] **** skipping: [node3] skipping: [node1] skipping: [node2]
TASK [docker : include] **** skipping: [node1] skipping: [node3] skipping: [node2]
TASK [docker : setup iptables for docker] ** changed: [node3] => (item=2385) changed: [node1] => (item=2385) changed: [node2] => (item=2385)
TASK [docker : copy systemd units for docker(enable cluster store) (debian)] *** ok: [node1] ok: [node3] ok: [node2]
TASK [docker : copy systemd units for docker(enable cluster store) (redhat)] *** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [docker : check docker-tcp socket state] ** changed: [node1] changed: [node3] changed: [node2]
TASK [docker : include] **** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [docker : copy systemd units for docker tcp socket settings] ** ok: [node1] ok: [node3] ok: [node2]
TASK [docker : reload systemd configuration] *** changed: [node1] changed: [node2] changed: [node3]
TASK [docker : stop docker] **** ok: [node1] ok: [node3] ok: [node2]
TASK [docker : start docker-tcp service] *** changed: [node2] changed: [node1] changed: [node3]
TASK [docker : check docker service state] ***** changed: [node1] changed: [node3] changed: [node2]
TASK [docker : remove the docker key file, if any. It shall be regenerated by docker on restart] *** changed: [node1] changed: [node2] changed: [node3]
TASK [docker : reload docker systemd configuration] **** changed: [node1] changed: [node2] changed: [node3]
TASK [docker : restart docker (first time)] **** fatal: [node1]: FAILED! => {"failed": true, "msg": "The conditional check 'thin_provisioned|changed' failed. The error was: |changed expects a dictionary\n\nThe error appears to have been in '/home/admin05/contiv_demo/ansible/roles/docker/tasks/main.yml': line 98, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# of some docker bug I've not investigated.\n- name: restart docker (first time)\n ^ here\n"} ...ignoring fatal: [node3]: FAILED! => {"failed": true, "msg": "The conditional check 'thin_provisioned|changed' failed. The error was: |changed expects a dictionary\n\nThe error appears to have been in '/home/admin05/contiv_demo/ansible/roles/docker/tasks/main.yml': line 98, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# of some docker bug I've not investigated.\n- name: restart docker (first time)\n ^ here\n"} ...ignoring fatal: [node2]: FAILED! => {"failed": true, "msg": "The conditional check 'thin_provisioned|changed' failed. The error was: |changed expects a dictionary\n\nThe error appears to have been in '/home/admin05/contiv_demo/ansible/roles/docker/tasks/main.yml': line 98, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# of some docker bug I've not investigated.\n- name: restart docker (first time)\n ^ here\n"} ...ignoring
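The fatal error on the `restart docker (first time)` task comes from applying the `changed` test to something that is not a registered task result: in this Ansible version, `thin_provisioned|changed` fails unless `thin_provisioned` is the dictionary produced by a `register:` on an earlier task. A minimal sketch of the working pattern (the provisioning task name and command below are hypothetical, not taken from the Contiv playbook):

```yaml
# Hypothetical sketch: the `changed` test needs a registered task result.
- name: set up thin provisioning              # hypothetical task
  command: /usr/local/bin/setup_thin_pool     # hypothetical command
  register: thin_provisioned                  # makes `thin_provisioned` a result dict

- name: restart docker (first time)
  service:
    name: docker
    state: restarted
  when: thin_provisioned | changed            # newer Ansible prefers `thin_provisioned is changed`
```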
TASK [docker : ensure docker is started] *** changed: [node1] changed: [node2] changed: [node3]
TASK [docker : stat] *** ok: [node1] ok: [node3] ok: [node2]
TASK [docker : Import saved docker images] ***** skipping: [node1] skipping: [node3] skipping: [node2]
TASK [docker : check docker-compose version] *** changed: [node3] changed: [node1] changed: [node2]
TASK [docker : download and install docker-compose] **** skipping: [node1] skipping: [node2] ok: [node3]
TASK [docker : check contiv-compose version] *** changed: [node1] changed: [node3] changed: [node2]
TASK [docker : download contiv-compose] **** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [docker : install contiv-compose] ***** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [etcd : download etcdctl v2.3.1] ** ok: [node1] ok: [node2] ok: [node3]
TASK [etcd : install etcdctl] ** changed: [node1] [WARNING]: Consider using unarchive module rather than running tar
changed: [node3] changed: [node2]
TASK [etcd : install etcd v2.3.1] ** changed: [node2] changed: [node1] changed: [node3]
TASK [etcd : setup iptables for etcd] ** changed: [node1] => (item=2379) changed: [node3] => (item=2379) changed: [node1] => (item=4001) changed: [node3] => (item=4001) changed: [node1] => (item=2380) changed: [node3] => (item=2380) changed: [node1] => (item=7001) changed: [node3] => (item=7001) changed: [node2] => (item=2379) changed: [node2] => (item=4001) changed: [node2] => (item=2380) changed: [node2] => (item=7001)
TASK [etcd : copy the etcd start/stop script] ** changed: [node1] changed: [node2] changed: [node3]
TASK [etcd : copy systemd units for etcd] ** ok: [node1] ok: [node3] ok: [node2]
TASK [etcd : start etcd] *** changed: [node1] changed: [node2] changed: [node3]
TASK [swarm : check for swarm image] *** changed: [node3] changed: [node2] changed: [node1]
TASK [swarm : download swarm container image] ** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [swarm : setup iptables for swarm] **** changed: [node1] => (item=2375) changed: [node2] => (item=2375) changed: [node3] => (item=2375)
TASK [swarm : copy the swarm start/stop script] **** ok: [node1] ok: [node3] ok: [node2]
TASK [swarm : copy systemd units for swarm] **** ok: [node1] ok: [node3] ok: [node2]
TASK [swarm : start swarm] ***** changed: [node2] changed: [node3] changed: [node1]
TASK [ucp : download and install ucp images] *** skipping: [node1] skipping: [node3] skipping: [node2]
TASK [ucp : setup iptables for ucp] **** skipping: [node1] => (item=12376) skipping: [node1] => (item=12379) skipping: [node1] => (item=12380) skipping: [node2] => (item=12376) skipping: [node1] => (item=12381) skipping: [node3] => (item=12376) skipping: [node2] => (item=12379) skipping: [node3] => (item=12379) skipping: [node1] => (item=12382) skipping: [node3] => (item=12380) skipping: [node2] => (item=12380) skipping: [node3] => (item=12381) skipping: [node1] => (item=12383) skipping: [node3] => (item=12382) skipping: [node2] => (item=12381) skipping: [node3] => (item=12383) skipping: [node3] => (item=12384) skipping: [node1] => (item=12384) skipping: [node1] => (item=12385) skipping: [node2] => (item=12382) skipping: [node3] => (item=12385) skipping: [node1] => (item=12386) skipping: [node2] => (item=12383) skipping: [node3] => (item=12386) skipping: [node3] => (item=2375) skipping: [node1] => (item=2375) skipping: [node3] => (item=2376) skipping: [node2] => (item=12384) skipping: [node3] => (item=443) skipping: [node1] => (item=2376) skipping: [node2] => (item=12385) skipping: [node1] => (item=443) skipping: [node2] => (item=12386) skipping: [node2] => (item=2375) skipping: [node2] => (item=2376) skipping: [node2] => (item=443)
TASK [ucp : copy the ucp license to the remote machine] **** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [ucp : copy the ucp start/stop script] **** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [ucp : copy systemd units for ucp] **** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [ucp : start ucp] ***** skipping: [node1] skipping: [node3] skipping: [node2]
TASK [ucp : create a local fetch directory if it doesn't exist] **** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [ucp : wait for ucp files to be created, which ensures the service has started] *** skipping: [node1] => (item=ucp-fingerprint) skipping: [node1] => (item=ucp-instance-id) skipping: [node1] => (item=ucp-certificate-backup.tar) skipping: [node2] => (item=ucp-fingerprint) skipping: [node3] => (item=ucp-fingerprint) skipping: [node3] => (item=ucp-instance-id) skipping: [node2] => (item=ucp-instance-id) skipping: [node3] => (item=ucp-certificate-backup.tar) skipping: [node2] => (item=ucp-certificate-backup.tar)
TASK [ucp : fetch the ucp files from master nodes] ***** skipping: [node1] => (item=ucp-fingerprint) skipping: [node1] => (item=ucp-instance-id) skipping: [node1] => (item=ucp-certificate-backup.tar) skipping: [node2] => (item=ucp-fingerprint) skipping: [node3] => (item=ucp-fingerprint) skipping: [node2] => (item=ucp-instance-id) skipping: [node3] => (item=ucp-instance-id) skipping: [node2] => (item=ucp-certificate-backup.tar) skipping: [node3] => (item=ucp-certificate-backup.tar)
TASK [ucp : copy the ucp files to replicas and worker nodes] *** skipping: [node1] => (item=ucp-fingerprint) skipping: [node1] => (item=ucp-instance-id) skipping: [node1] => (item=ucp-certificate-backup.tar) skipping: [node3] => (item=ucp-fingerprint) skipping: [node2] => (item=ucp-fingerprint) skipping: [node2] => (item=ucp-instance-id) skipping: [node3] => (item=ucp-instance-id) skipping: [node2] => (item=ucp-certificate-backup.tar) skipping: [node3] => (item=ucp-certificate-backup.tar)
TASK [contiv_network : check dns container image] ** changed: [node1] changed: [node3] changed: [node2]
TASK [contiv_network : pull dns container image] *** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [contiv_network : include] **** included: /home/admin05/contiv_demo/ansible/roles/contiv_network/tasks/ovs.yml for node1, node2, node3
TASK [contiv_network : download ovs binaries (redhat)] ***** skipping: [node3] => (item={u'url': u'https://cisco.box.com/shared/static/zzmpe1zesdpf270k9pml40rlm4o8fs56.rpm', u'dest': u'/tmp/openvswitch-2.3.1-2.el7.x86_64.rpm'}) skipping: [node1] => (item={u'url': u'https://cisco.box.com/shared/static/zzmpe1zesdpf270k9pml40rlm4o8fs56.rpm', u'dest': u'/tmp/openvswitch-2.3.1-2.el7.x86_64.rpm'}) skipping: [node2] => (item={u'url': u'https://cisco.box.com/shared/static/zzmpe1zesdpf270k9pml40rlm4o8fs56.rpm', u'dest': u'/tmp/openvswitch-2.3.1-2.el7.x86_64.rpm'})
TASK [contiv_network : install ovs (redhat)] *** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [contiv_network : download ovs binaries (debian)] ***** ok: [node1] => (item={u'url': u'https://cisco.box.com/shared/static/v1dvgoboo5zgqrtn6tu27vxeqtdo2bdl.deb', u'dest': u'/tmp/ovs-common.deb'}) ok: [node2] => (item={u'url': u'https://cisco.box.com/shared/static/v1dvgoboo5zgqrtn6tu27vxeqtdo2bdl.deb', u'dest': u'/tmp/ovs-common.deb'}) ok: [node3] => (item={u'url': u'https://cisco.box.com/shared/static/v1dvgoboo5zgqrtn6tu27vxeqtdo2bdl.deb', u'dest': u'/tmp/ovs-common.deb'}) ok: [node1] => (item={u'url': u'https://cisco.box.com/shared/static/ymbuwvt2qprs4tquextw75b82hyaxwon.deb', u'dest': u'/tmp/ovs-switch.deb'}) ok: [node2] => (item={u'url': u'https://cisco.box.com/shared/static/ymbuwvt2qprs4tquextw75b82hyaxwon.deb', u'dest': u'/tmp/ovs-switch.deb'}) ok: [node3] => (item={u'url': u'https://cisco.box.com/shared/static/ymbuwvt2qprs4tquextw75b82hyaxwon.deb', u'dest': u'/tmp/ovs-switch.deb'})
TASK [contiv_network : install ovs-common (debian)] **** ok: [node2] ok: [node1] ok: [node3]
TASK [contiv_network : install ovs (debian)] *** ok: [node2] ok: [node1] ok: [node3]
TASK [contiv_network : start ovs service] ** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [contiv_network : setup ovs] ** changed: [node1] => (item=tcp:127.0.0.1:6640) changed: [node2] => (item=tcp:127.0.0.1:6640) changed: [node3] => (item=tcp:127.0.0.1:6640) changed: [node1] => (item=ptcp:6640) changed: [node2] => (item=ptcp:6640) changed: [node3] => (item=ptcp:6640)
TASK [contiv_network : check selinux status] *** skipping: [node1] skipping: [node2] skipping: [node3]
TASK [contiv_network : permit openvswitch_t type in selinux] *** skipping: [node1] skipping: [node3] skipping: [node2]
TASK [contiv_network : setup iptables for vxlan vtep port] ***** changed: [node1] => (item=4789) changed: [node3] => (item=4789) changed: [node2] => (item=4789)
TASK [contiv_network : setup iptables for contiv network control plane] **** changed: [node1] => (item=9001) changed: [node3] => (item=9001) changed: [node2] => (item=9001) changed: [node1] => (item=9002) changed: [node3] => (item=9002) changed: [node2] => (item=9002) changed: [node1] => (item=9003) changed: [node2] => (item=9003) changed: [node3] => (item=9003) changed: [node1] => (item=9999) changed: [node2] => (item=9999) changed: [node3] => (item=9999) changed: [node1] => (item=8080) changed: [node3] => (item=8080) changed: [node2] => (item=8080) changed: [node1] => (item=179) changed: [node3] => (item=179) changed: [node2] => (item=179)
TASK [contiv_network : download netmaster and netplugin] *** ok: [node1] ok: [node2] ok: [node3]
TASK [contiv_network : ensure netplugin directory exists] ** ok: [node1] ok: [node3] ok: [node2]
TASK [contiv_network : install netmaster and netplugin] **** changed: [node3] changed: [node1] changed: [node2]
TASK [contiv_network : create links for netplugin binaries] **** ok: [node1] => (item=netctl) ok: [node2] => (item=netctl) ok: [node3] => (item=netctl) ok: [node1] => (item=netmaster) ok: [node3] => (item=netmaster) ok: [node2] => (item=netmaster) ok: [node1] => (item=netplugin) ok: [node2] => (item=netplugin) ok: [node3] => (item=netplugin) ok: [node1] => (item=contivk8s) ok: [node2] => (item=contivk8s) ok: [node3] => (item=contivk8s)
TASK [contiv_network : copy environment file for netplugin] **** ok: [node1] ok: [node3] ok: [node2]
TASK [contiv_network : copy systemd units for netplugin] *** ok: [node1] ok: [node3] ok: [node2]
TASK [contiv_network : copy bash auto complete file for netctl] **** ok: [node1] ok: [node2] ok: [node3]
TASK [contiv_network : start netplugin] **** changed: [node1] changed: [node2] changed: [node3]
TASK [contiv_network : setup netmaster host alias] ***** changed: [node2] changed: [node1] changed: [node3]
TASK [contiv_network : setup hostname alias] *** ok: [node3] => (item={u'regexp': u'^127.0.0.1', u'line': u'127.0.0.1 localhost'}) ok: [node2] => (item={u'regexp': u'^127.0.0.1', u'line': u'127.0.0.1 localhost'}) ok: [node1] => (item={u'regexp': u'^127.0.0.1', u'line': u'127.0.0.1 localhost'}) ok: [node2] => (item={u'regexp': u' nxmonit-05-135$', u'line': u'172.23.145.135 nxmonit-05-135'}) ok: [node1] => (item={u'regexp': u' nxmonit-05-136$', u'line': u'172.23.145.136 nxmonit-05-136'}) ok: [node3] => (item={u'regexp': u' nxmonit-05-134$', u'line': u'172.23.145.134 nxmonit-05-134'})
TASK [contiv_network : copy environment file for netmaster] **** ok: [node1] ok: [node2] ok: [node3]
TASK [contiv_network : copy systemd units for netmaster] *** ok: [node2] ok: [node1] ok: [node3]
TASK [contiv_network : start netmaster] **** changed: [node2] changed: [node1] changed: [node3]
TASK [contiv_network : download contivctl] ***** ok: [node2] ok: [node1] ok: [node3]
TASK [contiv_network : install contivctl] ** changed: [node1] changed: [node2] changed: [node3]
TASK [contiv_network : include] **** skipping: [node1] skipping: [node2] skipping: [node3]
PLAY RECAP *****
node1 : ok=60 changed=29 unreachable=0 failed=0
node2 : ok=60 changed=29 unreachable=0 failed=0
node3 : ok=61 changed=29 unreachable=0 failed=0
++ sudoExec docker -H unix:///var/run/docker.sock ps --no-trunc -f name=swarm-manager ++ grep '--advertise=[0-9,.]:[0-9]' -o ++ sudo -E docker -H unix:///var/run/docker.sock ps --no-trunc -f name=swarm-manager ++ awk -F= '{print $2}'
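The `++` trace above is the installer checking which address the swarm manager advertises; an empty result from that pipeline is what leads to the "swarm cluster did not form" conclusion below. The extraction step can be exercised in isolation with a fabricated `docker ps` line (the sample input is made up for illustration):

```shell
# Fabricated stand-in for `docker ps --no-trunc -f name=swarm-manager` output.
sample='/swarm manage --advertise=172.23.145.136:2375 consul://localhost:8500'

# Extract the advertised host:port the same way the installer does.
echo "$sample" \
  | grep -o -- '--advertise=[0-9.]*:[0-9]*' \
  | awk -F= '{print $2}'
# prints: 172.23.145.136:2375
```

If the swarm-manager container is not running (or carries no `--advertise` flag), the pipeline prints nothing and the check fails.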
Could you please restart the script? It says the swarm cluster did not form. Also, could you please paste your cfg.yml file?
You can log in to the VNC at 172.23.145.134:1 to check all the details from the Cisco network.
It worked when I ran ./net_demo_installer without the "-r" option.
admin05@nxmonit-05-136:~/contiv_demo$ cat cfg.yml
CONNECTION_INFO:
  172.23.145.134:
    control: ens32
    data: ens34
  172.23.145.135:
    control: ens32
    data: ens34
  172.23.145.136:
    control: ens32
    data: ens34
admin05@nxmonit-05-135:~$ netctl network ls
ERRO[0000] Get http://netmaster:9999/api/networks/: dial tcp: lookup netmaster on 171.70.168.183:53: no such host
admin05@nxmonit-05-135:~$
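The lookup failure means the `netmaster` alias never made it into `/etc/hosts` on that node, so the name falls through to the DNS server at 171.70.168.183 (the installer's `setup netmaster host alias` task normally writes this entry). A manual workaround is to add the alias yourself; the IP below is an assumption that netmaster runs on node1 and may differ in your setup:

```
# /etc/hosts (fragment) -- hypothetical entry; point `netmaster` at the node
# that is actually running the netmaster service
172.23.145.136  netmaster
```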
I have brought up the contiv cluster manually due to the OVS issue with ./net_demo_installer. Not sure if some of the dependencies were not met.
admin05@nxmonit-05-135:~$ sudo docker version
Client:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Tue Apr 26 23:38:55 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Tue Apr 26 23:38:55 2016
 OS/Arch:      linux/amd64
admin05@nxmonit-05-135:~$
admin05@nxmonit-05-135:~$ etcdctl cluster-health
member 493c117890e44f30 is healthy: got healthy result from http://172.23.145.135:2379
member 70448b100ff5839a is healthy: got healthy result from http://172.23.145.136:2379
member 70559519761b575d is healthy: got healthy result from http://172.23.145.134:2379
cluster is healthy
admin05@nxmonit-05-135:~$