oVirt / ovirt-ansible-collection

Ansible collection with official oVirt modules and roles

ovirt_disk: Fault reason is "Operation Failed". Fault detail is "[Internal Engine Error]". HTTP response code is 400. #429

Open mattpoel opened 2 years ago

mattpoel commented 2 years ago
SUMMARY

We have a playbook to add/remove direct LUNs to VMs in oVirt. Since an oVirt update, this process no longer works properly and fails with an internal error that gives no details:

"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Internal Engine Error]\". HTTP response code is 400."

ovirt_disk is updated to the latest version (or, to be precise, the whole ovirt-ansible-collection is).

COMPONENT NAME

ovirt.ovirt.ovirt_disk

STEPS TO REPRODUCE
  - name: "ovirt / OLVM -> Add Direct LUN (not activated)"
    delegate_to: localhost
    ovirt.ovirt.ovirt_disk:
      auth: "{{ ovirt_auth }}"
      name: "testdbcl_RDATA02"
      host: "tdbkvm"
      interface: virtio_scsi
      vm_name: "testdbcl1"
      propagate_errors: True
      shareable: True
      activate: no
      scsi_passthrough: disabled
      logical_unit:
        id: "36000144000000010f04ca7574b4d02b9"
        storage_type: fcp
EXPECTED RESULTS

LUN gets configured as a direct LUN and attached to the VM.

ACTUAL RESULTS

ovirt_disk creates the disk, but it doesn't get attached to the VM. The following error is raised:

TASK [OLVM / testdbcl1 -> Add Direct LUNs (not activated)] ****************************************************************************
task path: /app/homes/ansible/ANSIBLE/linux/OLVM-01-Reconfigure_Direct_LUNs_without_SCSI_PT.yml:56
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<localhost> EXEC /bin/sh -c 'echo ~ansible && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /app/hsthome/ansible/.ansible/tmp `"&& mkdir "` echo /app/hsthome/ansible/.ansible/tmp/ansible-tmp-1645093781.6571941-15388-77213271880759 `" && echo ansible-tmp-1645093781.6571941-15388-77213271880759="` echo /app/hsthome/ansible/.ansible/tmp/ansible-tmp-1645093781.6571941-15388-77213271880759 `" ) && sleep 0'
Using module file /app/ansible/olvm/tools/Python-3.9.10/lib/python3.9/site-packages/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_disk.py
<localhost> PUT /app/homes/ansible/.ansible/tmp/ansible-local-15320ebmad1g5/tmp3lbpve6h TO /app/homes/ansible/.ansible/tmp/ansible-tmp-1645093781.6571941-15388-77213271880759/AnsiballZ_ovirt_disk.py
<localhost> EXEC /bin/sh -c 'chmod u+x /app/hsthome/ansible/.ansible/tmp/ansible-tmp-1645093781.6571941-15388-77213271880759/ /app/hsthome/ansible/.ansible/tmp/ansible-tmp-1645093781.6571941-15388-77213271880759/AnsiballZ_ovirt_disk.py && sleep 0'
<localhost> EXEC /bin/sh -c '/app/ansible/olvm/tools/Python-3.9.10/bin/python3.9 /app/hsthome/ansible/.ansible/tmp/ansible-tmp-1645093781.6571941-15388-77213271880759/AnsiballZ_ovirt_disk.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /app/hsthome/ansible/.ansible/tmp/ansible-tmp-1645093781.6571941-15388-77213271880759/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_ovirt.ovirt.ovirt_disk_payload_k93xumkn/ansible_ovirt.ovirt.ovirt_disk_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_disk.py", line 911, in main
  File "/app/ansible/olvm/tools/Python-3.9.10/lib/python3.9/site-packages/ovirtsdk4/services.py", line 38686, in refresh_lun
    return self._internal_action(action, 'refreshlun', None, headers, query, wait)
  File "/app/ansible/olvm/tools/Python-3.9.10/lib/python3.9/site-packages/ovirtsdk4/service.py", line 299, in _internal_action
    return future.wait() if wait else future
  File "/app/ansible/olvm/tools/Python-3.9.10/lib/python3.9/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/app/ansible/olvm/tools/Python-3.9.10/lib/python3.9/site-packages/ovirtsdk4/service.py", line 296, in callback
    self._check_fault(response)
  File "/app/ansible/olvm/tools/Python-3.9.10/lib/python3.9/site-packages/ovirtsdk4/service.py", line 134, in _check_fault
    self._raise_error(response, body.fault)
  File "/app/ansible/olvm/tools/Python-3.9.10/lib/python3.9/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Internal Engine Error]". HTTP response code is 400.
[WARNING]: Module did not set no_log for pass_discard
failed: [testdbcl1 -> localhost] (item={'name': 'RDATA02', 'id': '36000144000000010f04ca7574b4d02b9'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "activate": false,
            "auth": {
                "ca_file": null,
                "compress": true,
                "headers": null,
                "hostname": null,
                "insecure": true,
                "kerberos": false,
                "password": null,
                "timeout": 0,
                "token": "asdfasdfasdfasdfasdf",
                "url": "https://btolvm.acmetest.local/ovirt-engine/api",
                "username": null
            },
            "backup": null,
            "bootable": null,
            "content_type": "data",
            "description": null,
            "download_image_path": null,
            "fetch_nested": false,
            "force": false,
            "format": "cow",
            "host": "tdbkvm",
            "id": "1f1d207c-a4df-4015-a1cc-4c3ccfb37c2a",
            "image_provider": null,
            "interface": "virtio_scsi",
            "logical_unit": {
                "id": "36000144000000010f04ca7574b4d02b9",
                "storage_type": "fcp"
            },
            "name": "testdbcl_RDATA02",
            "nested_attributes": [],
            "openstack_volume_type": null,
            "pass_discard": null,
            "poll_interval": 3,
            "profile": null,
            "propagate_errors": true,
            "quota_id": null,
            "scsi_passthrough": "disabled",
            "shareable": true,
            "size": null,
            "sparse": null,
            "sparsify": null,
            "state": "present",
            "storage_domain": null,
            "storage_domains": null,
            "timeout": 180,
            "upload_image_path": null,
            "uses_scsi_reservation": null,
            "vm_id": null,
            "vm_name": "testdbcl1",
            "wait": true,
            "wipe_after_delete": null
        }
    },
    "item": {
        "id": "36000144000000010f04ca7574b4d02b9",
        "name": "RDATA02"
    },
    "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Internal Engine Error]\". HTTP response code is 400."
}
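The traceback points at the refreshlun action (ovirt_disk.py line 911 calling refresh_lun in ovirtsdk4). Assuming the host parameter is what triggers that LUN refresh, which is how the module documentation describes host, a variant of the same task without host should skip the refresh step entirely. This is only a diagnostic sketch and has not been verified against this engine version:

  - name: "ovirt / OLVM -> Add Direct LUN (no host, skip LUN refresh)"
    delegate_to: localhost
    ovirt.ovirt.ovirt_disk:
      auth: "{{ ovirt_auth }}"
      name: "testdbcl_RDATA02"
      interface: virtio_scsi
      vm_name: "testdbcl1"
      propagate_errors: True
      shareable: True
      activate: no
      scsi_passthrough: disabled
      logical_unit:
        id: "36000144000000010f04ca7574b4d02b9"
        storage_type: fcp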
mattpoel commented 2 years ago

oVirt / OLVM version is 4.3.10.4-1.0.22.el7

mattpoel commented 2 years ago

Actually, the disk even gets assigned, and it is unclear why this internal engine error is raised (screenshot: 2022-02-17_12-24-20).
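Since the disk does end up created and attached, one option is to tolerate only this specific fault while still failing on anything else. A minimal sketch, assuming the error text stays stable across engine versions:

  - name: "ovirt / OLVM -> Add Direct LUN (not activated)"
    delegate_to: localhost
    ovirt.ovirt.ovirt_disk:
      auth: "{{ ovirt_auth }}"
      name: "testdbcl_RDATA02"
      host: "tdbkvm"
      interface: virtio_scsi
      vm_name: "testdbcl1"
      shareable: True
      activate: no
      scsi_passthrough: disabled
      logical_unit:
        id: "36000144000000010f04ca7574b4d02b9"
        storage_type: fcp
    register: direct_lun_result
    # Treat the task as failed only if the failure is NOT the known "Internal Engine Error" fault
    failed_when:
      - direct_lun_result is failed
      - "'Internal Engine Error' not in (direct_lun_result.msg | default(''))"

Since this masks the error, a follow-up check of the attachment in the engine UI (or via ovirt.ovirt.ovirt_disk_info) is still advisable.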

MightyS33 commented 1 year ago

Same/similar issue here. ovirt_disk creates the disk, but I get the following error:

The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_ovirt.ovirt.ovirt_disk_payload_0wr1xxin/ansible_ovirt.ovirt.ovirt_disk_p
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 39273, in refresh_lun
    return self._internal_action(action, 'refreshlun', None, headers, query, wait)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 299, in _internal_acti
    return future.wait() if wait else future
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 296, in callback
    self._check_fault(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 134, in _check_fault
    self._raise_error(response, body.fault)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Internal Engine Error]

It seems there is a problem with synchronizing the LUNs. In the engine's events, the message "Direct LUN synchronization started" appears.

Here is my Ansible task:

- name: Create Direct LUN Disk
  ovirt.ovirt.ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: "{{ vm_name }}_fc1"
    host: "{{ direct_lun_host }}"
    logical_unit:
      id: "{{ lun_id }}"
      storage_type: fcp
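If the failure is just a race with that "Direct LUN synchronization started" event, a plain retry may be enough to get past it. This is only a sketch, reusing the same (hypothetical) variables as the task above:

- name: Create Direct LUN Disk
  ovirt.ovirt.ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: "{{ vm_name }}_fc1"
    host: "{{ direct_lun_host }}"
    logical_unit:
      id: "{{ lun_id }}"
      storage_type: fcp
  register: direct_lun_disk
  # Retry a few times in case the engine is still synchronizing the direct LUN
  retries: 3
  delay: 10
  until: direct_lun_disk is succeeded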