I have an initiator that can be seen in gwcli as such:
From here I attempt to remove the disk using:
disk remove rbd/disk2_2021-12-15
If I monitor the changes in the configuration object rbd/gateway.conf, I see that the "epoch" line and the "updated" timestamps at the bottom of the JSON increment when the command is issued, but the disk is not removed from the JSON and my gwcli command hangs forever.
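For reference, this is roughly how I'm watching the config object. It assumes the default ceph-iscsi layout (an object named gateway.conf in the rbd pool); jq is only there to pull out the two fields I'm tracking:

```shell
# Dump the ceph-iscsi configuration object and extract the fields that
# change on every config commit. Pool name "rbd" and object name
# "gateway.conf" are the ceph-iscsi defaults; adjust if yours differ.
rados -p rbd get gateway.conf - | jq '{epoch: .epoch, updated: .updated}'
```

Re-running this before and after issuing the disk remove shows the epoch bump even though the disk entry itself never disappears.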
Further, if I look in the rbd-target-api log, I see the following for the disk that fails to delete:
2022-01-05 10:58:45,372 INFO [_internal.py:87:_log()] - IP_REDACTED - - [05/Jan/2022 10:58:45] "DELETE /api/disk/rbd/disk2_2021-12-15 HTTP/1.1" 400 -
If I compare this to a cluster where disk removals are working, I see:
2022-01-05 11:38:46,846 INFO [_internal.py:87:_log()] - IP_REDACTED - - [05/Jan/2022 11:38:46] "DELETE /api/_disk/rbd/disk9_2021-11-20 HTTP/1.1" 200 -
noting _disk vs disk in the path between the working and non-working examples. I am hesitant to restart rbd-target-api to see if it fixes the issue, as I don't want to exacerbate the problem and I'm not sure how to reproduce it.
Running ceph-iscsi-3.4-1.el7.noarch.