tsuna-server / build-server-ansible


Add a role to have roles 'compute', 'cinder', 'swift' #113

Closed. TsutomuNakamura closed this issue 1 year ago.

TsutomuNakamura commented 1 year ago

To implement a setup in which some nodes are responsible only for storing external data, fix the scenarios.

TsutomuNakamura commented 1 year ago

I will create a new group, comstorages, that combines the features of the storages and computes groups.
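
A minimal inventory sketch of what that could look like, assuming a YAML inventory and the dev-comstorage* hostnames that appear in the logs below; the file path and group layout are illustrative, not the repository's actual inventory:

# inventory/dev.yml (hypothetical path): comstorage nodes join both the
# computes and storages groups, so they receive both sets of roles.
all:
  children:
    computes:
      children:
        comstorages:
    storages:
      children:
        comstorages:
    comstorages:
      hosts:
        dev-comstorage01:
        dev-comstorage02:
        dev-comstorage03: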

TsutomuNakamura commented 1 year ago
TASK [ceph : Push the file to dev-controller01 for controller nodes of Ceph] **********************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
failed: [dev-controller01] (item=/etc/ceph/ceph.client.glance.keyring) => {"ansible_loop_var": "item", "changed": false, "item": "/etc/ceph/ceph.client.glance.keyring", "msg": "Could not find or access '.buffer/ceph.client.glance.keyring'\nSearched in:\n\t/opt/ansible/roles/ceph/files/.buffer/ceph.client.glance.keyring\n\t/opt/ansible/roles/ceph/.buffer/ceph.client.glance.keyring\n\t/opt/ansible/roles/ceph/tasks/push_resources_to_other_nodes/to_controllers/files/.buffer/ceph.client.glance.keyring\n\t/opt/ansible/roles/ceph/tasks/push_resources_to_other_nodes/to_controllers/.buffer/ceph.client.glance.keyring\n\t/opt/ansible/files/.buffer/ceph.client.glance.keyring\n\t/opt/ansible/.buffer/ceph.client.glance.keyring on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
failed: [dev-controller01] (item=/etc/ceph/ceph.client.cinder.keyring) => {"ansible_loop_var": "item", "changed": false, "item": "/etc/ceph/ceph.client.cinder.keyring", "msg": "Could not find or access '.buffer/ceph.client.cinder.keyring'\nSearched in:\n\t/opt/ansible/roles/ceph/files/.buffer/ceph.client.cinder.keyring\n\t/opt/ansible/roles/ceph/.buffer/ceph.client.cinder.keyring\n\t/opt/ansible/roles/ceph/tasks/push_resources_to_other_nodes/to_controllers/files/.buffer/ceph.client.cinder.keyring\n\t/opt/ansible/roles/ceph/tasks/push_resources_to_other_nodes/to_controllers/.buffer/ceph.client.cinder.keyring\n\t/opt/ansible/files/.buffer/ceph.client.cinder.keyring\n\t/opt/ansible/.buffer/ceph.client.cinder.keyring on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}
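
The search paths in the error show that copy resolves src on the Ansible controller, so the .buffer/ keyrings must be staged there before this task runs. A sketch of one way to do that, assuming the keyrings originate in /etc/ceph on a Ceph monitor host; the task name and paths are illustrative:

- name: Stage Ceph client keyrings into the controller-side .buffer directory
  # Runs against a Ceph monitor host; fetch copies remote files back
  # to the Ansible controller.
  ansible.builtin.fetch:
    src: "/etc/ceph/{{ item }}"
    dest: "{{ role_path }}/files/.buffer/{{ item }}"
    flat: true   # keep just the file name, no per-host subdirectory
  loop:
    - ceph.client.glance.keyring
    - ceph.client.cinder.keyring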
TsutomuNakamura commented 1 year ago

The playbook run will stop after...

TASK [ceph : Get version of Ceph manager protocol] ********************************************************************
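
One way to make such a hang fail fast instead of blocking the whole run is the task-level timeout keyword (ansible-core 2.10+). A sketch assuming the task shells out to the ceph CLI; the exact command is a guess:

- name: Get version of Ceph manager protocol
  ansible.builtin.command: ceph versions   # assumed command; adjust to the real task
  timeout: 120   # fail after two minutes instead of hanging indefinitely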
TsutomuNakamura commented 1 year ago
TASK [verify_swift : Run "swift stat" to verify a Swift] ************************************************************************
skipping: [dev-comstorage01]
skipping: [dev-comstorage02]
skipping: [dev-comstorage03]
fatal: [dev-controller01]: FAILED! => {"changed": true, "cmd": ["swift", "stat"], "delta": "0:00:36.665297", "end": "2023-09-02 03:57:36.293751", "msg": "non-zero return code", "rc": 1, "start": "2023-09-02 03:56:59.628454", "stderr": "Account HEAD failed: http://dev-controller01:8080/v1/AUTH_c30af729e1bf47d0a185c762c3baf65c 503 Service Unavailable\nFailed Transaction ID: txb9d87d7e227a4b43a70dc-0064f2b2b0", "stderr_lines": ["Account HEAD failed: http://dev-controller01:8080/v1/AUTH_c30af729e1bf47d0a185c762c3baf65c 503 Service Unavailable", "Failed Transaction ID: txb9d87d7e227a4b43a70dc-0064f2b2b0"], "stdout": "", "stdout_lines": []}
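
A 503 from the proxy immediately after deployment often just means the Swift services are still settling. A sketch of retrying the check instead of failing on the first attempt, assuming admin credentials are already in the environment; the retry count and delay are arbitrary:

- name: Run "swift stat" to verify Swift
  ansible.builtin.command: swift stat
  register: swift_stat
  retries: 5         # with "until", Ansible re-runs a failed task
  delay: 30
  until: swift_stat.rc == 0
  changed_when: false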
TsutomuNakamura commented 1 year ago

Duplication

TASK [post_configuration : Declare host group] *****************
ok: [dev-controller01]
fatal: [dev-comstorage01]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'group_name' is undefined. 'group_name' is undefined

The error appears to be in '/opt/ansible/roles/post_configuration/tasks/configure_bridge_for_ovn/main.yml': line 19, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: Declare host group
  ^ here
"}
TsutomuNakamura commented 1 year ago
2023-09-03 08:44:08.364 1962 ERROR nova.scheduler.utils [None req-b898f91c-e6cc-4e46-8d7d-2fb74785c5e0 2ef6ee7a6d70487098b89e2588bfca12 3365531769be468987be8ec21555223b - - default default] [instance: b50e64f8-a1dc-4aac-9655-6e1dd5d3144d] Error from last host: dev-comstorage02 (node dev-comstorage02): ['Traceback (most recent call last):
', '  File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2517, in _build_and_run_instance
    self.driver.spawn(context, instance, image_meta,
', '  File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 4369, in spawn
    self._create_guest_with_network(
', '  File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7726, in _create_guest_with_network
    with excutils.save_and_reraise_exception():
', '  File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__
    self.force_reraise()
', '  File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise
    raise self.value
', '  File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7704, in _create_guest_with_network
    guest = self._create_guest(
', '  File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7643, in _create_guest
    guest.launch(pause=pause)
', '  File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 167, in launch
    with excutils.save_and_reraise_exception():
', '  File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__
    self.force_reraise()
', '  File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise
    raise self.value
', '  File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 165, in launch
    return self._domain.createWithFlags(flags)
', '  File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 193, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
', '  File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 151, in proxy_call
    rv = execute(f, *args, **kwargs)
', '  File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 132, in execute
    six.reraise(c, e, tb)
', '  File "/usr/lib/python3/dist-packages/six.py", line 719, in reraise
    raise value
', '  File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 86, in tworker
    rv = meth(*args, **kwargs)
', '  File "/usr/lib/python3/dist-packages/libvirt.py", line 1385, in createWithFlags
    raise libvirtError(\'virDomainCreateWithFlags() failed\')
', "libvirt.libvirtError: Secret not found: no secret with matching uuid '941e9d51-7ed2-45b6-9737-a3abcaa58c5a'
", '
During handling of the above exception, another exception occurred:

', 'Traceback (most recent call last):
', '  File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2340, in _do_build_and_run_instance
    self._build_and_run_instance(context, instance, image,
', '  File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2613, in _build_and_run_instance
    raise exception.RescheduledException(
', "nova.exception.RescheduledException: Build of instance b50e64f8-a1dc-4aac-9655-6e1dd5d3144d was re-scheduled: Secret not found: no secret with matching uuid '941e9d51-7ed2-45b6-9737-a3abcaa58c5a'
"]
2023-09-03 08:44:08.364 1962 WARNING nova.scheduler.utils [None req-b898f91c-e6cc-4e46-8d7d-2fb74785c5e0 2ef6ee7a6d70487098b89e2588bfca12 3365531769be468987be8ec21555223b - - default default] Failed to compute_task_build_instances: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance b50e64f8-a1dc-4aac-9655-6e1dd5d3144d. Last exception: Secret not found: no secret with matching uuid '941e9d51-7ed2-45b6-9737-a3abcaa58c5a': nova.exception.MaxRetriesExceeded: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance b50e64f8-a1dc-4aac-9655-6e1dd5d3144d. Last exception: Secret not found: no secret with matching uuid '941e9d51-7ed2-45b6-9737-a3abcaa58c5a'
2023-09-03 08:44:08.364 1962 WARNING nova.scheduler.utils [None req-b898f91c-e6cc-4e46-8d7d-2fb74785c5e0 2ef6ee7a6d70487098b89e2588bfca12 3365531769be468987be8ec21555223b - - default default] [instance: b50e64f8-a1dc-4aac-9655-6e1dd5d3144d] Setting instance to ERROR state.: nova.exception.MaxRetriesExceeded: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance b50e64f8-a1dc-4aac-9655-6e1dd5d3144d. Last exception: Secret not found: no secret with matching uuid '941e9d51-7ed2-45b6-9737-a3abcaa58c5a'
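
"Secret not found" usually means the libvirt secret holding the Ceph client.cinder key was never defined on that compute node; the UUID must match rbd_secret_uuid in nova.conf. A sketch of defining it with virsh, wrapped as Ansible tasks; the XML path is hypothetical and the secret file is assumed to declare the UUID from the log:

- name: Define the libvirt secret for the Ceph cinder key
  # /tmp/cinder-secret.xml is assumed to contain the <uuid> from the
  # error above and a <usage type='ceph'> block for client.cinder.
  ansible.builtin.command: virsh secret-define --file /tmp/cinder-secret.xml

- name: Set the secret value to the client.cinder key
  ansible.builtin.shell: >
    virsh secret-set-value
    --secret 941e9d51-7ed2-45b6-9737-a3abcaa58c5a
    --base64 "$(ceph auth get-key client.cinder)"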
TsutomuNakamura commented 1 year ago

"Discover compute hosts in nova" can be applied only if there is a sleep of a few minutes beforehand.

TASK [Gathering Facts] ********************************************************
ok: [dev-controller01]

TASK [nova_discover_hosts : Pause for 2 minutes to build app cache] ****************************************************************************
Pausing for 120 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [dev-controller01]

TASK [nova_discover_hosts : Discover compute hosts in nova] *****************************************************************************
changed: [dev-controller01]
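
A sketch of what those two tasks could look like, assuming the standard nova-manage cell_v2 discovery command; the pause length mirrors the log above:

- name: Pause for 2 minutes to build app cache
  ansible.builtin.pause:
    minutes: 2

- name: Discover compute hosts in nova
  ansible.builtin.command: nova-manage cell_v2 discover_hosts --verbose
  become: true
  become_user: nova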
TsutomuNakamura commented 1 year ago

comstorages might lack metadata_proxy_shared_secret = <password> in ./nova/nova.conf.

Is metadata_proxy_shared_secret = <password> in ./neutron/neutron_ovn_metadata_agent.ini needed as well?
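
A sketch of setting the shared secret in both files with the ini_file module, assuming the stock OpenStack section names (nova reads it from [neutron], the OVN metadata agent from [DEFAULT]); the metadata_secret variable is hypothetical:

- name: Set metadata_proxy_shared_secret in nova.conf
  community.general.ini_file:
    path: /etc/nova/nova.conf
    section: neutron
    option: metadata_proxy_shared_secret
    value: "{{ metadata_secret }}"

- name: Set metadata_proxy_shared_secret for the OVN metadata agent
  community.general.ini_file:
    path: /etc/neutron/neutron_ovn_metadata_agent.ini
    section: DEFAULT
    option: metadata_proxy_shared_secret
    value: "{{ metadata_secret }}"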