osism / testbed

With this testbed, it is possible to run a full OSISM installation, the baseline of the Sovereign Cloud Stack, on an existing OpenStack environment such as Cleura or REGIO.cloud.
https://osism.tech/docs/guides/other-guides/testbed
Apache License 2.0

Testbed is failing in my openstack cloud #152

Closed. ghost closed this issue 4 years ago.

ghost commented 4 years ago

Hi berendt & team,

First of all, thank you very much for your effort and this amazing project of yours.

I have 3 bare-metal servers (16 CPU, 64 GB RAM) with OpenStack deployed on top of them via kolla-ansible, so I decided to test your testbed deployment on top of my OpenStack and see how it goes. So far so good; however, I have been able to deploy everything except OpenStack :)

What happened:

I started:

openstack --os-cloud testbed \
  stack create \
  -e heat/environment.yml \
  --parameter deploy_ceph=true \
  --parameter deploy_infrastructure=true \
  --timeout 150 \
  -t heat/stack.yml testbed

As you can see, I used only the Ceph and infrastructure deployment; accordingly, my Heat stack finished with CREATE_COMPLETE.

+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name | Project                          | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| a7bca5cc-e36a-4f6a-8408-193aca23c93b | testbed    | 3ef77e22e984498d98841e14bf6dfc08 | CREATE_COMPLETE | 2020-05-02T18:54:47Z | None         |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+

Before that, I had tried to deploy everything at once and it failed, so I decided to run the OpenStack deployment script manually. After the stack was deployed, I SSH'd into the manager and ran: /opt/configuration/scripts/deploy_openstack_services_basic.sh
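
A sketch of that manual run (the "dragon" user appears later in this thread as the testbed default; the floating IP is just a placeholder):

ssh dragon@<manager-floating-ip>
/opt/configuration/scripts/deploy_openstack_services_basic.sh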

It failed with the following errors:

TASK [keystone : Check keystone containers] ************************************
changed: [testbed-node-1.osism.local] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone:train-latest', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5000', 'listen_port': '5000'}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'port': '5000', 'listen_port': '5000'}, 'keystone_admin': {'enabled': True, 'mode': 'http', 'external': False, 'port': '35357', 'listen_port': '35357'}}}})
changed: [testbed-node-0.osism.local] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone:train-latest', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5000', 'listen_port': '5000'}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'port': '5000', 'listen_port': '5000'}, 'keystone_admin': {'enabled': True, 'mode': 'http', 'external': False, 'port': '35357', 'listen_port': '35357'}}}})
changed: [testbed-node-1.osism.local] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone-ssh:train-latest', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}}})
changed: [testbed-node-0.osism.local] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone-ssh:train-latest', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}}})
changed: [testbed-node-1.osism.local] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone-fernet:train-latest', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}}})
changed: [testbed-node-0.osism.local] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone-fernet:train-latest', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}}})

TASK [keystone : include_tasks] ************************************************
skipping: [testbed-node-0.osism.local]
skipping: [testbed-node-1.osism.local]

TASK [keystone : include_tasks] ************************************************
included: /ansible/roles/keystone/tasks/bootstrap.yml for testbed-node-0.osism.local, testbed-node-1.osism.local

TASK [keystone : Creating keystone database] ***********************************
fatal: [testbed-node-0.osism.local -> 192.168.40.10]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is not running."}

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
testbed-manager.osism.local : ok=9    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0.osism.local : ok=24   changed=1    unreachable=0    failed=1    skipped=8    rescued=0    ignored=0
testbed-node-1.osism.local : ok=22   changed=1    unreachable=0    failed=0    skipped=7    rescued=0    ignored=0
testbed-node-2.osism.local : ok=9    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
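
The "kolla_toolbox container is not running" failure points at the node itself; a quick diagnostic on the affected node (a sketch, not part of the original run) would be:

# on testbed-node-0: is the kolla_toolbox container present at all, and why did it stop?
sudo docker ps -a --filter name=kolla_toolbox
sudo docker logs kolla_toolbox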

PLAY [Prepare masquerading on the manager node] ********************************

TASK [Accpet FORWARD on the management interface (incoming)] *******************
ok: [testbed-manager.osism.local]

TASK [Accept FORWARD on the management interface (outgoing)] *******************
ok: [testbed-manager.osism.local]

TASK [Masquerade traffic on the management interface] **************************
ok: [testbed-manager.osism.local]

PLAY [Bootstrap basic OpenStack services] **************************************

TASK [Create test project] *****************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Failed to discover available identity versions when contacting http://api.osism.local:5000/v3. Attempting to parse version from URL.\nTraceback (most recent call last):\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connection.py\", line 160, in _new_conn\n    (self._dns_host, self.port), self.timeout, **extra_kw\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py\", line 84, in create_connection\n    raise err\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py\", line 74, in create_connection\n    sock.connect(sa)\nOSError: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 677, in urlopen\n    chunked=chunked,\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 392, in _make_request\n    conn.request(method, url, **httplib_request_kw)\n  File \"/usr/lib/python3.6/http/client.py\", line 1264, in request\n    self._send_request(method, url, body, headers, encode_chunked)\n  File \"/usr/lib/python3.6/http/client.py\", line 1310, in _send_request\n    self.endheaders(body, encode_chunked=encode_chunked)\n  File \"/usr/lib/python3.6/http/client.py\", line 1259, in endheaders\n    self._send_output(message_body, encode_chunked=encode_chunked)\n  File \"/usr/lib/python3.6/http/client.py\", line 1038, in _send_output\n    self.send(msg)\n  File \"/usr/lib/python3.6/http/client.py\", line 976, in send\n    self.connect()\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connection.py\", line 187, in connect\n    conn = self._new_conn()\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connection.py\", line 172, in _new_conn\n    self, \"Failed to establish a new connection: %s\" % e\nurllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f0765c2e7b8>: Failed to establish a new connection: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/usr/local/lib/python3.6/dist-packages/requests/adapters.py\", line 449, in send\n    timeout=timeout\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 725, in urlopen\n    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/util/retry.py\", line 439, in increment\n    raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='api.osism.local', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0765c2e7b8>: Failed to establish a new connection: [Errno 113] No route to host',))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 1004, in _send_request\n    resp = self.session.request(method, url, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/requests/sessions.py\", line 530, in request\n    resp = self.send(prep, **send_kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/requests/sessions.py\", line 643, in send\n    r = adapter.send(request, **kwargs)\n  File 
\"/usr/local/lib/python3.6/dist-packages/requests/adapters.py\", line 516, in send\n    raise ConnectionError(e, request=request)\nrequests.exceptions.ConnectionError: HTTPConnectionPool(host='api.osism.local', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0765c2e7b8>: Failed to establish a new connection: [Errno 113] No route to host',))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"<stdin>\", line 102, in <module>\n  File \"<stdin>\", line 94, in _ansiballz_main\n  File \"<stdin>\", line 40, in invoke_module\n  File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_os_project_payload_2snpvh5o/ansible_os_project_payload.zip/ansible/modules/cloud/openstack/os_project.py\", line 211, in <module>\n  File \"/tmp/ansible_os_project_payload_2snpvh5o/ansible_os_project_payload.zip/ansible/modules/cloud/openstack/os_project.py\", line 174, in main\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_identity.py\", line 99, in get_project\n    domain_id=domain_id)\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_utils.py\", line 205, in _get_entity\n    entities = search(name_or_id, filters, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_identity.py\", line 84, in search_projects\n    domain_id=domain_id, name_or_id=name_or_id, filters=filters)\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_identity.py\", line 56, in list_projects\n    if self._is_client_version('identity', 3):\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/openstackcloud.py\", line 461, in _is_client_version\n    client = getattr(self, client_name)\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_identity.py\", line 32, in _identity_client\n    'identity', min_version=2, max_version='3.latest')\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/openstackcloud.py\", line 408, in _get_versioned_client\n    if adapter.get_endpoint():\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/adapter.py\", line 282, in get_endpoint\n    return self.session.get_endpoint(auth or self.auth, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 1225, in get_endpoint\n    return auth.get_endpoint(self, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/base.py\", line 380, in get_endpoint\n    allow_version_hack=allow_version_hack, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/base.py\", line 271, in get_endpoint_data\n    service_catalog = self.get_access(session).service_catalog\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/base.py\", line 134, in get_access\n    self.auth_ref = self.get_auth_ref(session)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/generic/base.py\", line 208, in get_auth_ref\n    return self._plugin.get_auth_ref(session, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/v3/base.py\", line 184, in get_auth_ref\n    authenticated=False, 
log=False, **rkwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 1131, in post\n    return self.request(url, 'POST', **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 913, in request\n    resp = send(**kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 1020, in _send_request\n    raise exceptions.ConnectFailure(msg)\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://api.osism.local:5000/v3/auth/tokens: HTTPConnectionPool(host='api.osism.local', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0765c2e7b8>: Failed to establish a new connection: [Errno 113] No route to host',))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

PLAY RECAP *********************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
testbed-manager.osism.local : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Then I SSH'd into one of the nodes and noticed that both keepalived and haproxy were restarting with errors:

docker logs haproxy
standard_init_linux.go:211: exec user process caused "exec format error"
standard_init_linux.go:211: exec user process caused "exec format error"

docker logs keepalived
standard_init_linux.go:211: exec user process caused "exec format error"
standard_init_linux.go:211: exec user process caused "exec format error"

Thus, I could not figure out why they were failing and why haproxy could not bring up the virtual IP. Maybe it is somehow related to this deployment running inside another OpenStack :) Nevertheless, Ceph is running fine and the node setup finished successfully, I guess.
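
For reference, "exec format error" usually means the image's entrypoint binary was built for a different CPU architecture than the host, or the local copy is corrupt. A quick check on the node (diagnostic sketch):

# compare the image architecture with the host architecture
sudo docker image inspect quay.io/osism/haproxy:train-latest --format '{{.Os}}/{{.Architecture}}'
uname -m
# re-pull in case the local copy is damaged
sudo docker pull quay.io/osism/haproxy:train-latest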

My environment file:

---
parameter_defaults:
  availability_zone: nova
  volume_availability_zone: nova
  network_availability_zone: nova
  flavor_node: 4C-16GB-40GB
  flavor_manager: 4C-4GB-20GB
  image: bionic
  public: external
  volume_size_storage: 10
  ceph_version: octopus
  openstack_version: train
  configuration_version: master

Please let me know if you can help me troubleshoot this and whether you need any additional information.

Thank you & Regards

berendt commented 4 years ago

Thanks for the issue. The latest OpenStack Train images on Quay are currently broken. New working images should be available tomorrow during the day.

ghost commented 4 years ago

Thanks for the issue. The latest OpenStack Train images on Quay are currently broken. New working images should be available tomorrow during the day.

Okay, I will try again in the evening then. Thank you

berendt commented 4 years ago

Okay, I will try again in the evening then. Thank you

The train-latest images are usable again. Please test.

ghost commented 4 years ago

Okay, I will try again in the evening then. Thank you

The train-latest images are usable again. Please test.

Hi Christian,

This time I ran:

openstack --os-cloud testbed \
  stack create \
  -e heat/environment.yml \
  --parameter deploy_ceph=true \
  --parameter deploy_infrastructure=true \
  --parameter deploy_openstack=true \
  --timeout 150 \
  -t heat/stack.yml testbed

and I also went for Ceph Nautilus instead of Octopus, because when I tried Octopus in the morning, Glance was failing due to "could not find ceph keyrings".

Currently, my Heat stack status is "CREATE_FAILED", but it is still bootstrapping OpenStack services at:

TASK [neutron : Creating Neutron database user and setting permissions] ********
changed: [testbed-node-0.osism.local -> 192.168.40.10]

TASK [neutron : include_tasks] *************************************************
included: /ansible/roles/neutron/tasks/bootstrap_service.yml for testbed-node-0.osism.local, testbed-node-1.osism.local

TASK [neutron : Running Neutron bootstrap container] ***************************

So I assume it will finish in the end. Also, I wanted to ask: how can I access the services? I tried sshuttle, and even tried "sudo route add -net 192.168.40.0/24 gw 10.0.2.12" (the Neutron router IP address); however, I still cannot open the services locally. Any advice on that one?

Thank you

berendt commented 4 years ago

and I also went for Ceph Nautilus instead of Octopus, because when I tried Octopus in the morning, Glance was failing due to "could not find ceph keyrings".

Octopus is not yet well tested and upstream is still very active; that's why errors occur here from time to time. Nautilus is therefore set as the default and is what we currently deploy.

Currently, my Heat stack status is "CREATE_FAILED", but it is still bootstrapping OpenStack services at:

Stack timeout reached?
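
A quick way to check whether the Heat timeout was hit (sketch):

openstack --os-cloud testbed stack show testbed -c stack_status -c stack_status_reason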

So I assume it will finish in the end. Also, I wanted to ask: how can I access the services? I tried sshuttle, and even tried "sudo route add -net 192.168.40.0/24 gw 10.0.2.12" (the Neutron router IP address); however, I still cannot open the services locally. Any advice on that one?

Run make sshuttle. The APIs/Horizon can then be accessed at 192.168.50.200. Created instances are only accessible via the manager; I'll change that.
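
For reference, the make sshuttle target wraps roughly the following (a sketch assuming the default testbed subnets and the dragon user; the exact call is in the Makefile):

sshuttle -r dragon@<manager-floating-ip> 192.168.40.0/24 192.168.50.0/24
# Horizon / the APIs are then reachable via 192.168.50.200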

ghost commented 4 years ago

and I also went for Ceph Nautilus instead of Octopus, because when I tried Octopus in the morning, Glance was failing due to "could not find ceph keyrings".

Octopus is not yet well tested and upstream is still very active; that's why errors occur here from time to time. Nautilus is therefore set as the default and is what we currently deploy.

Currently, my Heat stack status is "CREATE_FAILED", but it is still bootstrapping OpenStack services at:

Stack timeout reached?

So I assume it will finish in the end. Also, I wanted to ask: how can I access the services? I tried sshuttle, and even tried "sudo route add -net 192.168.40.0/24 gw 10.0.2.12" (the Neutron router IP address); however, I still cannot open the services locally. Any advice on that one?

Run make sshuttle. The APIs/Horizon can then be accessed at 192.168.50.200. Created instances are only accessible via the manager; I'll change that.

Hi Christian,

I just deployed the testbed OpenStack environment via the Terraform scripts you uploaded recently. I used another Linux machine for sshuttle and it worked; however, once I logged in to OpenStack I got http://imgur.com/sYGLd7Fl.png ("You are not authorized to access this page. Login"). I am wondering if this testbed can be used for anything? My point was to test Magnum. Kolla's Magnum is very unstable; even though I managed to install it from the master branch with the auto-scaler and auto-healer, it was still very broken.

Using Terraform, "make ssh && sshuttle" is not working; I assume the vars are wrong. Cockpit has the wrong password in the docs.

P.S. WireGuard would be easier than sshuttle, yeah :)

Thank you

berendt commented 4 years ago

I am wondering if this testbed can be used for anything?

The testbed is fully functional. At least RefStack runs through completely.

Please try an openstack --os-cloud admin token issue from the testbed-manager.
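
If the token request succeeds, Keystone behind the internal VIP is reachable; if it fails with the same "No route to host" as above, the problem is still in front of the API. A minimal check from the manager (sketch, assuming the preconfigured admin entry in clouds.yaml):

openstack --os-cloud admin token issue
openstack --os-cloud admin endpoint list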

My point was to test Magnum. Kolla's Magnum is very unstable; even though I managed to install it from the master branch with the auto-scaler and auto-healer, it was still very broken.

Magnum itself is somewhat unstable. I will add the necessary templates to the testbed tomorrow.

Using Terraform, "make ssh && sshuttle" is not working; I assume the vars are wrong.

I can't confirm that. Works here. What's the error message?

Cockpit has the wrong password in the docs.

The password in the docs is already correct; it just wasn't being set. It's fixed (#171).

ghost commented 4 years ago

Hi Christian,

I was trying to deploy the testbed again yesterday. The keepalived images were broken again. Is there any way I can change the images from master to a stable branch?

P.S. Also, I have been reading the docs a lot, and I am wondering: what would you say is the main difference between OSISM and Kayobe (Kolla's project)?

Thank you and regards

berendt commented 4 years ago

I was trying to deploy the testbed again yesterday. The keepalived images were broken again. Is there any way I can change the images from master to a stable branch?

The latest images are working fine. Three hours ago I created an environment with the latest images (built about 48 hours ago). Keepalived and all the other images work.

What was the error message?

quay.io/osism/keepalived                  train-latest        83395f55d20a        47 hours ago        212MB

P.S. Also, I have been reading the docs a lot, and I am wondering: what would you say is the main difference between OSISM and Kayobe (Kolla's project)?

Kayobe is a CLI for using kolla-ansible; it has been in development for about a year, and parts of it will be integrated into our framework in the future.

ghost commented 4 years ago

I was trying to deploy the testbed again yesterday. The keepalived images were broken again. Is there any way I can change the images from master to a stable branch?

The latest images are working fine. Three hours ago I created an environment with the latest images (built about 48 hours ago). Keepalived and all the other images work.

What was the error message?

quay.io/osism/keepalived                  train-latest        83395f55d20a        47 hours ago        212MB

P.S. Also, I have been reading the docs a lot, and I am wondering: what would you say is the main difference between OSISM and Kayobe (Kolla's project)?

Kayobe is a CLI for using kolla-ansible; it has been in development for about a year, and parts of it will be integrated into our framework in the future.

Hi,

Deployment: make deploy-openstack ENVIRONMENT=environment.yml (using Terraform)

TASK [haproxy : Deploy haproxy containers] *************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-0.osism.local] (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'haproxy', 'enabled': True, 'image': 'quay.io/osism/haproxy:train-latest', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "haproxy", "value": {"container_name": "haproxy", "dimensions": {}, "enabled": true, "group": "haproxy", "image": "quay.io/osism/haproxy:train-latest", "privileged": true, "volumes": ["/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "haproxy_socket:/var/lib/kolla/haproxy/"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_SkmE6x/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-1.osism.local] (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'haproxy', 'enabled': True, 'image': 'quay.io/osism/haproxy:train-latest', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "haproxy", "value": {"container_name": "haproxy", "dimensions": {}, "enabled": true, "group": "haproxy", "image": "quay.io/osism/haproxy:train-latest", "privileged": true, "volumes": ["/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "haproxy_socket:/var/lib/kolla/haproxy/"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_ef1aZG/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-1.osism.local] (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'haproxy', 'enabled': True, 'image': 'quay.io/osism/keepalived:train-latest', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keepalived", "value": {"container_name": "keepalived", "dimensions": {}, "enabled": true, "group": "haproxy", "image": "quay.io/osism/keepalived:train-latest", "privileged": true, "volumes": ["/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "/lib/modules:/lib/modules:ro", "haproxy_socket:/var/lib/kolla/haproxy/"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_VJhsBk/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-0.osism.local] (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'haproxy', 'enabled': True, 'image': 'quay.io/osism/keepalived:train-latest', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keepalived", "value": {"container_name": "keepalived", "dimensions": {}, "enabled": true, "group": "haproxy", "image": "quay.io/osism/keepalived:train-latest", "privileged": true, "volumes": ["/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "/lib/modules:/lib/modules:ro", "haproxy_socket:/var/lib/kolla/haproxy/"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_ditZPc/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

RUNNING HANDLER [haproxy : Restart haproxy container] **************************

RUNNING HANDLER [haproxy : Restart keepalived container] ***********************

PLAY RECAP *********************************************************************
testbed-manager.osism.local : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0.osism.local : ok=21   changed=10   unreachable=0    failed=1    skipped=4    rescued=0    ignored=0
testbed-node-1.osism.local : ok=21   changed=10   unreachable=0    failed=1    skipped=4    rescued=0    ignored=0
testbed-node-2.osism.local : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

After this error, the Ceph deployment started. Once the Ceph deployment was finished:

TASK [keystone : Check keystone containers] ************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-1.osism.local] (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone:train-latest', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5000', 'listen_port': '5000'}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'port': '5000', 'listen_port': '5000'}, 'keystone_admin': {'enabled': True, 'mode': 'http', 'external': False, 'port': '35357', 'listen_port': '35357'}}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keystone", "value": {"container_name": "keystone", "dimensions": {}, "enabled": true, "group": "keystone", "haproxy": {"keystone_admin": {"enabled": true, "external": false, "listen_port": "35357", "mode": "http", "port": "35357"}, "keystone_external": {"enabled": true, "external": true, "listen_port": "5000", "mode": "http", "port": "5000"}, "keystone_internal": {"enabled": true, "external": false, "listen_port": "5000", "mode": "http", "port": "5000"}}, "image": "quay.io/osism/keystone:train-latest", "volumes": ["/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "", "kolla_logs:/var/log/kolla/", "keystone_fernet_tokens:/etc/keystone/fernet-keys"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_v1B1PP/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-0.osism.local] (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone:train-latest', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5000', 'listen_port': '5000'}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'port': '5000', 'listen_port': '5000'}, 'keystone_admin': {'enabled': True, 'mode': 'http', 'external': False, 'port': '35357', 'listen_port': '35357'}}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keystone", "value": {"container_name": "keystone", "dimensions": {}, "enabled": true, "group": "keystone", "haproxy": {"keystone_admin": {"enabled": true, "external": false, "listen_port": "35357", "mode": "http", "port": "35357"}, "keystone_external": {"enabled": true, "external": true, "listen_port": "5000", "mode": "http", "port": "5000"}, "keystone_internal": {"enabled": true, "external": false, "listen_port": "5000", "mode": "http", "port": "5000"}}, "image": "quay.io/osism/keystone:train-latest", "volumes": ["/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "", "kolla_logs:/var/log/kolla/", "keystone_fernet_tokens:/etc/keystone/fernet-keys"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_fWp65m/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-1.osism.local] (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone-ssh:train-latest', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keystone-ssh", "value": {"container_name": "keystone_ssh", "dimensions": {}, "enabled": true, "group": "keystone", "image": "quay.io/osism/keystone-ssh:train-latest", "volumes": ["/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "kolla_logs:/var/log/kolla/", "keystone_fernet_tokens:/etc/keystone/fernet-keys"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_x3rOE9/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-0.osism.local] (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone-ssh:train-latest', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keystone-ssh", "value": {"container_name": "keystone_ssh", "dimensions": {}, "enabled": true, "group": "keystone", "image": "quay.io/osism/keystone-ssh:train-latest", "volumes": ["/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "kolla_logs:/var/log/kolla/", "keystone_fernet_tokens:/etc/keystone/fernet-keys"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_ILOIUP/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-1.osism.local] (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone-fernet:train-latest', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keystone-fernet", "value": {"container_name": "keystone_fernet", "dimensions": {}, "enabled": true, "group": "keystone", "image": "quay.io/osism/keystone-fernet:train-latest", "volumes": ["/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "kolla_logs:/var/log/kolla/", "keystone_fernet_tokens:/etc/keystone/fernet-keys"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_gBy6Ly/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-0.osism.local] (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone-fernet:train-latest', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keystone-fernet", "value": {"container_name": "keystone_fernet", "dimensions": {}, "enabled": true, "group": "keystone", "image": "quay.io/osism/keystone-fernet:train-latest", "volumes": ["/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "kolla_logs:/var/log/kolla/", "keystone_fernet_tokens:/etc/keystone/fernet-keys"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_558mz6/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

RUNNING HANDLER [keystone : Restart keystone container] ************************

RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************

RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************

PLAY RECAP *********************************************************************
testbed-manager.osism.local : ok=9    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0.osism.local : ok=22   changed=6    unreachable=0    failed=1    skipped=7    rescued=0    ignored=0
testbed-node-1.osism.local : ok=20   changed=6    unreachable=0    failed=1    skipped=6    rescued=0    ignored=0
testbed-node-2.osism.local : ok=9    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
TASK [Create test project] *****************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Failed to discover available identity versions when contacting http://api.osism.local:5000/v3. Attempting to parse version from URL.\nTraceback (most recent call last):\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connection.py\", line 160, in _new_conn\n    (self._dns_host, self.port), self.timeout, **extra_kw\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py\", line 84, in create_connection\n    raise err\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py\", line 74, in create_connection\n    sock.connect(sa)\nOSError: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 677, in urlopen\n    chunked=chunked,\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 392, in _make_request\n    conn.request(method, url, **httplib_request_kw)\n  File \"/usr/lib/python3.6/http/client.py\", line 1264, in request\n    self._send_request(method, url, body, headers, encode_chunked)\n  File \"/usr/lib/python3.6/http/client.py\", line 1310, in _send_request\n    self.endheaders(body, encode_chunked=encode_chunked)\n  File \"/usr/lib/python3.6/http/client.py\", line 1259, in endheaders\n    self._send_output(message_body, encode_chunked=encode_chunked)\n  File \"/usr/lib/python3.6/http/client.py\", line 1038, in _send_output\n    self.send(msg)\n  File \"/usr/lib/python3.6/http/client.py\", line 976, in send\n    self.connect()\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connection.py\", line 187, in connect\n    conn = self._new_conn()\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connection.py\", line 172, in _new_conn\n    self, \"Failed to establish a new connection: %s\" % e\nurllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f5c15f78780>: Failed to establish a new connection: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/usr/local/lib/python3.6/dist-packages/requests/adapters.py\", line 449, in send\n    timeout=timeout\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 725, in urlopen\n    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/util/retry.py\", line 439, in increment\n    raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='api.osism.local', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5c15f78780>: Failed to establish a new connection: [Errno 113] No route to host',))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 1004, in _send_request\n    resp = self.session.request(method, url, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/requests/sessions.py\", line 530, in request\n    resp = self.send(prep, **send_kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/requests/sessions.py\", line 643, in send\n    r = adapter.send(request, **kwargs)\n  File 
\"/usr/local/lib/python3.6/dist-packages/requests/adapters.py\", line 516, in send\n    raise ConnectionError(e, request=request)\nrequests.exceptions.ConnectionError: HTTPConnectionPool(host='api.osism.local', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5c15f78780>: Failed to establish a new connection: [Errno 113] No route to host',))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"<stdin>\", line 102, in <module>\n  File \"<stdin>\", line 94, in _ansiballz_main\n  File \"<stdin>\", line 40, in invoke_module\n  File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_os_project_payload_zmu7qz1l/ansible_os_project_payload.zip/ansible/modules/cloud/openstack/os_project.py\", line 211, in <module>\n  File \"/tmp/ansible_os_project_payload_zmu7qz1l/ansible_os_project_payload.zip/ansible/modules/cloud/openstack/os_project.py\", line 174, in main\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_identity.py\", line 99, in get_project\n    domain_id=domain_id)\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_utils.py\", line 205, in _get_entity\n    entities = search(name_or_id, filters, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_identity.py\", line 84, in search_projects\n    domain_id=domain_id, name_or_id=name_or_id, filters=filters)\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_identity.py\", line 56, in list_projects\n    if self._is_client_version('identity', 3):\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/openstackcloud.py\", line 461, in _is_client_version\n    client = getattr(self, client_name)\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_identity.py\", line 32, in _identity_client\n    'identity', min_version=2, max_version='3.latest')\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/openstackcloud.py\", line 408, in _get_versioned_client\n    if adapter.get_endpoint():\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/adapter.py\", line 282, in get_endpoint\n    return self.session.get_endpoint(auth or self.auth, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 1225, in get_endpoint\n    return auth.get_endpoint(self, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/base.py\", line 380, in get_endpoint\n    allow_version_hack=allow_version_hack, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/base.py\", line 271, in get_endpoint_data\n    service_catalog = self.get_access(session).service_catalog\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/base.py\", line 134, in get_access\n    self.auth_ref = self.get_auth_ref(session)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/generic/base.py\", line 208, in get_auth_ref\n    return self._plugin.get_auth_ref(session, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/v3/base.py\", line 184, in get_auth_ref\n    authenticated=False, 
log=False, **rkwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 1131, in post\n    return self.request(url, 'POST', **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 913, in request\n    resp = send(**kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 1020, in _send_request\n    raise exceptions.ConnectFailure(msg)\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://api.osism.local:5000/v3/auth/tokens: HTTPConnectionPool(host='api.osism.local', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5c15f78780>: Failed to establish a new connection: [Errno 113] No route to host',))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
testbed-manager.osism.local : ok=3    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
TASK [service-ks-register : heat | Creating services] **************************
FAILED - RETRYING: heat | Creating services (5 retries left).
FAILED - RETRYING: heat | Creating services (4 retries left).
FAILED - RETRYING: heat | Creating services (3 retries left).
FAILED - RETRYING: heat | Creating services (2 retries left).

Thus, it is logical that every single OpenStack service will fail, because the haproxy and keepalived containers are not running.
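
"ImportError: No module named docker" from the kolla_docker module usually means the Docker SDK for Python is missing for the interpreter Ansible uses on the target nodes. A quick check there (diagnostic sketch; one of the two interpreters may not be installed):

# on testbed-node-0 / testbed-node-1
python  -c 'import docker; print(docker.__version__)'
python3 -c 'import docker; print(docker.__version__)'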

dragon@testbed-node-0:~$ sudo docker ps
CONTAINER ID        IMAGE                               COMMAND                  CREATED             STATUS              PORTS               NAMES
07f8b15064d0        osism/ceph-daemon:nautilus-latest   "/opt/ceph-container…"   5 minutes ago       Up 5 minutes                            ceph-rgw-testbed-node-0-rgw0
1e5d840866ac        osism/ceph-daemon:nautilus-latest   "/opt/ceph-container…"   11 minutes ago      Up 11 minutes                           ceph-mds-testbed-node-0
c094c4e577ce        osism/ceph-daemon:nautilus-latest   "/opt/ceph-container…"   15 minutes ago      Up 15 minutes                           ceph-osd-3
f8cf6abc2adc        osism/ceph-daemon:nautilus-latest   "/opt/ceph-container…"   15 minutes ago      Up 15 minutes                           ceph-osd-1
2f6b1999747e        osism/ceph-daemon:nautilus-latest   "/opt/ceph-container…"   17 minutes ago      Up 17 minutes                           ceph-mgr-testbed-node-0
87a0ce0bb686        osism/ceph-daemon:nautilus-latest   "/opt/ceph-container…"   19 minutes ago      Up 18 minutes                           ceph-mon-testbed-node-0

The manager is fine, though:

dragon@testbed-manager:~$ sudo docker ps
CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS                    PORTS                                           NAMES
ac65f95b174e        osism/cephclient:nautilus               "/usr/bin/dumb-init …"   4 minutes ago       Up 4 minutes                                                              cephclient_cephclient_1
c844d430be1e        osism/openstackclient:train             "/usr/bin/dumb-init …"   33 minutes ago      Up 33 minutes                                                             openstackclient_openstackclient_1
d9b9f7fb6985        osism/phpmyadmin:latest                 "/docker-entrypoint.…"   34 minutes ago      Up 34 minutes (healthy)   192.168.40.5:8110->80/tcp                       phpmyadmin_phpmyadmin_1
9ee8d2b6585c        rpardini/docker-registry-proxy:latest   "/entrypoint.sh"         53 minutes ago      Up 53 minutes (healthy)   80/tcp, 8081/tcp, 192.168.50.5:8000->3128/tcp   registry-proxy
4356eeb879a9        osism/kolla-ansible:train               "/usr/bin/dumb-init …"   59 minutes ago      Up 37 minutes                                                             manager_kolla-ansible_1
7db050107c5d        osism/ceph-ansible:nautilus             "/usr/bin/dumb-init …"   59 minutes ago      Up 37 minutes                                                             manager_ceph-ansible_1
a837cf60d126        osism/osism-ansible:latest              "/usr/bin/dumb-init …"   59 minutes ago      Up 37 minutes                                                             manager_osism-ansible_1
64bd877eb04d        osism/ara-server:latest                 "sh -c '/wait && /ru…"   59 minutes ago      Up 37 minutes (healthy)   192.168.40.5:8120->8000/tcp                     manager_ara-server_1
414c33fd2c5b        osism/mariadb:latest                    "docker-entrypoint.s…"   59 minutes ago      Up 37 minutes (healthy)   3306/tcp                                        manager_database_1
531992da510a        osism/redis:latest                      "docker-entrypoint.s…"   59 minutes ago      Up 37 minutes (healthy)   6379/tcp                                        manager_cache_1

Thank you

berendt commented 4 years ago

Please run just a plain make deploy (with your environment) first. Then only the manager and the nodes are prepared, which saves time during debugging.

Then run osism-kolla deploy common followed by osism-kolla deploy haproxy. This deploys only HAProxy.

Please look at testbed-node-0 and testbed-node-1 to see what the HAProxy or Keepalived error is.
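
Put together, the suggested debugging loop looks roughly like this (a sketch; the exact make target name is assumed from the deploy-openstack target used earlier in this thread):

make deploy ENVIRONMENT=environment.yml   # prepare only the manager and the nodes
# then, on the manager:
osism-kolla deploy common
osism-kolla deploy haproxy
# and on testbed-node-0 / testbed-node-1:
sudo docker logs haproxy
sudo docker logs keepalived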

Alternatively, if the environment is still running, you can purge OpenStack and Ceph (https://docs.osism.de/testbed/usage.html#purge-services). Then you do not have to rebuild completely.

osism-kolla _ purge
osism-ceph purge-container-cluster

berendt commented 4 years ago

Please reopen if there is any further need.