rposudnevskiy / RBDSR

RBDSR - XenServer/XCP-ng Storage Manager plugin for CEPH
GNU Lesser General Public License v2.1

Not compatible with XenOrchestra DeltaBackup #14

Closed mhoffmann75 closed 7 years ago

mhoffmann75 commented 8 years ago

It seems RBDSR is not compatible with XenOrchestra's DeltaBackup - backups simply "fail". Unfortunately XO gives no further information, and there are no obvious errors in XenServer's SMlog:

Aug 11 21:03:59 pns-xen07 SM: [17729] ['ceph', 'df', '--format', 'json', '--name', 'client.admin'] Aug 11 21:04:00 pns-xen07 SM: [17729] preit SUCCESS Aug 11 21:04:00 pns-xen07 SM: [17729] vdi_snapshot {'sr_uuid': 'ff12160f-ff09-40bb-a874-1366ad907f44', 'subtask_of': 'DummyRef:|6680d904-5370-bad4-38c0-8f0d3c7249d9|VDI.snapshot', 'vdi_ref': 'OpaqueRef:c7b74e33-c24e-e757-9dda-89a68b638c12', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': '2851fb05-59fd-4159-ad0b-9a3b6219047e', 'host_ref': 'OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df', 'session_ref': 'OpaqueRef:6623272a-04bf-77bb-6650-d29f143b74c6', 'device_config': {'SRmaster': 'true'}, 'command': 'vdi_snapshot', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:5a051e6e-4e23-beff-e3c7-7f7c926575fc', 'driver_params': {'epochhint': '91d22328-25ca-b5a4-ef34-9e1f46213ffa'}, 'vdi_uuid': '2851fb05-59fd-4159-ad0b-9a3b6219047e'} Aug 11 21:04:00 pns-xen07 SM: [17729] RBDVDI.snapshot for 2851fb05-59fd-4159-ad0b-9a3b6219047e Aug 11 21:04:00 pns-xen07 SM: [17729] ['uuidgen', '-r'] Aug 11 21:04:00 pns-xen07 SM: [17729] preit SUCCESS Aug 11 21:04:00 pns-xen07 SM: [17729] ['rbd', 'image-meta', 'list', 'VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e', '--pool', 'RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44', '--format', 'json', '--name', 'client.admin'] Aug 11 21:04:00 pns-xen07 SM: [17729] preit SUCCESS Aug 11 21:04:00 pns-xen07 SM: [17729] Pause request for 2851fb05-59fd-4159-ad0b-9a3b6219047e Aug 11 21:04:00 pns-xen07 SM: [17729] Calling tap-pause on host OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df Aug 11 21:04:00 pns-xen07 SM: [17816] lock: opening lock file /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:00 pns-xen07 SM: [17816] lock: acquired /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:00 pns-xen07 SM: [17816] Pause for 2851fb05-59fd-4159-ad0b-9a3b6219047e Aug 11 21:04:00 pns-xen07 SM: [17816] Calling tap pause with minor 2 Aug 11 21:04:00 pns-xen07 SM: [17816] ['/usr/sbin/tap-ctl', 'pause', '-p', '7107', '-m', '2'] Aug 11 21:04:00 pns-xen07 SM: [17816] = 0 Aug 11 21:04:00 pns-xen07 SM: [17816] lock: released /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:00 pns-xen07 SM: [17816] lock: closed /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:00 pns-xen07 SM: [17729] Calling _unmap_VHD Aug 11 21:04:00 pns-xen07 SM: [17729] Calling ceph_plugin Aug 11 21:04:00 pns-xen07 SM: [17729] Calling rbd/nbd map on host OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df Aug 11 21:04:00 pns-xen07 SM: [17824] ['realpath', '/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e'] Aug 11 21:04:00 pns-xen07 SM: [17824] preit SUCCESS Aug 11 21:04:00 pns-xen07 SM: [17824] ['unlink', '/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e'] Aug 11 21:04:00 pns-xen07 SM: [17824] preit SUCCESS Aug 11 21:04:00 pns-xen07 SM: [17824] ['rbd-nbd', 'unmap', '/dev/nbd2', '--name', 'client.admin'] Aug 11 21:04:00 pns-xen07 SM: [17824] preit SUCCESS Aug 11 21:04:00 pns-xen07 SM: [17729] ['rbd', 'snap', 'create', 'VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e@SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f', '--pool', 'RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44', '--name', 'client.admin'] Aug 11 21:04:01 pns-xen07 SM: [17729] preit SUCCESS Aug 11 21:04:01 pns-xen07 SM: [17729] ['rbd', 'snap', 'protect', 'VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e@SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f', '--pool', 
'RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44', '--name', 'client.admin'] Aug 11 21:04:01 pns-xen07 SM: [17729] preit SUCCESS Aug 11 21:04:01 pns-xen07 SM: [17729] Calling _map_VHD Aug 11 21:04:01 pns-xen07 SM: [17729] Calling ceph_plugin Aug 11 21:04:01 pns-xen07 SM: [17729] Calling rbd/nbd map on host OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df Aug 11 21:04:02 pns-xen07 SM: [17913] ['rbd-nbd', '--nbds_max', '64', 'map', 'RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e', '--name', 'client.admin'] Aug 11 21:04:02 pns-xen07 SM: [17913] preit SUCCESS Aug 11 21:04:02 pns-xen07 SM: [17913] ['ln', '-s', '/dev/nbd2', '/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e'] Aug 11 21:04:02 pns-xen07 SM: [17913] preit SUCCESS Aug 11 21:04:02 pns-xen07 SM: [17729] Unpause request for 2851fb05-59fd-4159-ad0b-9a3b6219047e secondary=None Aug 11 21:04:02 pns-xen07 SM: [17729] Calling tap-unpause on host OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df Aug 11 21:04:02 pns-xen07 SM: [17947] lock: opening lock file /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:02 pns-xen07 SM: [17947] lock: acquired /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:02 pns-xen07 SM: [17947] Unpause for 2851fb05-59fd-4159-ad0b-9a3b6219047e Aug 11 21:04:02 pns-xen07 SM: [17947] Realpath: /dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e Aug 11 21:04:02 pns-xen07 SM: [17947] Calling tap unpause with minor 2 Aug 11 21:04:02 pns-xen07 SM: [17947] ['/usr/sbin/tap-ctl', 'unpause', '-p', '7107', '-m', '2', '-a', 'aio:/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e'] Aug 11 21:04:02 pns-xen07 SM: [17947] = 0 Aug 11 21:04:02 pns-xen07 SM: [17947] lock: released /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:02 pns-xen07 SM: [17947] lock: closed /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:02 pns-xen07 SM: [17990] ['ceph', 'df', '--format', 'json', '--name', 'client.admin'] Aug 11 21:04:03 pns-xen07 SM: [17990] preit SUCCESS Aug 11 21:04:03 pns-xen07 SM: [17990] vdi_update {'sr_uuid': 'ff12160f-ff09-40bb-a874-1366ad907f44', 'subtask_of': 'DummyRef:|e602ebc0-6cc3-9649-9724-8c0ace59bfd4|VDI.update', 'vdi_ref': 'OpaqueRef:3e615ac2-6f17-d482-5792-8a688d6683de', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': '0850e980-6cb8-49bf-95d0-d777b1bf438f', 'host_ref': 'OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df', 'session_ref': 'OpaqueRef:c4b1c129-6b9c-7652-0b0d-32ca0d214127', 'device_config': {'SRmaster': 'true'}, 'command': 'vdi_update', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:5a051e6e-4e23-beff-e3c7-7f7c926575fc', 'vdi_uuid': '0850e980-6cb8-49bf-95d0-d777b1bf438f'} Aug 11 21:04:03 pns-xen07 SM: [17990] RBDSR.update for 0850e980-6cb8-49bf-95d0-d777b1bf438f Aug 11 21:04:03 pns-xen07 SM: [17990] ['rbd', 'image-meta', 'set', 'VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e', 'VDI_LABEL', 'Server2012R2 0', '--pool', 'RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44', '--name', 'client.admin'] Aug 11 21:04:03 pns-xen07 SM: [17990] preit SUCCESS Aug 11 21:04:03 pns-xen07 SM: [17990] ['rbd', 'image-meta', 'set', 'VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e', 'VDI_DESCRIPTION', 'Created by template provisioner', '--pool', 'RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44', '--name', 'client.admin'] Aug 11 21:04:03 pns-xen07 SM: [17990] preit SUCCESS Aug 
11 21:04:03 pns-xen07 SM: [17990] ['rbd', 'image-meta', 'set', 'VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e', 'SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f', '20160811T19:03:59Z', '--pool', 'RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44', '--name', 'client.admin'] Aug 11 21:04:03 pns-xen07 SM: [17990] preit SUCCESS Aug 11 21:04:04 pns-xen07 SM: [18140] ['ceph', 'df', '--format', 'json', '--name', 'client.admin'] Aug 11 21:04:04 pns-xen07 SM: [18140] preit SUCCESS Aug 11 21:04:04 pns-xen07 SM: [18140] vdi_attach {'sr_uuid': 'ff12160f-ff09-40bb-a874-1366ad907f44', 'subtask_of': 'DummyRef:|a14d28d2-c61e-5fef-8e3e-c734b15ac379|VDI.attach', 'vdi_ref': 'OpaqueRef:3e615ac2-6f17-d482-5792-8a688d6683de', 'vdi_on_boot': 'persist', 'args': ['false'], 'vdi_location': '0850e980-6cb8-49bf-95d0-d777b1bf438f', 'host_ref': 'OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df', 'session_ref': 'OpaqueRef:ba6f66ee-876c-ea29-36b8-5f7e41b03fc6', 'device_config': {'SRmaster': 'true'}, 'command': 'vdi_attach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:5a051e6e-4e23-beff-e3c7-7f7c926575fc', 'vdi_uuid': '0850e980-6cb8-49bf-95d0-d777b1bf438f'} Aug 11 21:04:04 pns-xen07 SM: [18140] lock: opening lock file /var/lock/sm/0850e980-6cb8-49bf-95d0-d777b1bf438f/vdi Aug 11 21:04:04 pns-xen07 SM: [18140] result: {'o_direct_reason': 'LICENSE_RESTRICTION', 'params': '/dev/sm/backend/ff12160f-ff09-40bb-a874-1366ad907f44/0850e980-6cb8-49bf-95d0-d777b1bf438f', 'o_direct': True, 'xenstore_data': {'scsi/0x12/0x80': 'AIAAEjA4NTBlOTgwLTZjYjgtNDkgIA==', 'scsi/0x12/0x83': 'AIMAMQIBAC1YRU5TUkMgIDA4NTBlOTgwLTZjYjgtNDliZi05NWQwLWQ3NzdiMWJmNDM4ZiA=', 'vdi-uuid': '0850e980-6cb8-49bf-95d0-d777b1bf438f', 'mem-pool': 'ff12160f-ff09-40bb-a874-1366ad907f44'}} Aug 11 21:04:04 pns-xen07 SM: [18140] lock: closed /var/lock/sm/0850e980-6cb8-49bf-95d0-d777b1bf438f/vdi Aug 11 21:04:04 pns-xen07 SM: [18192] ['ceph', 'df', '--format', 'json', '--name', 'client.admin'] Aug 11 21:04:05 pns-xen07 SM: [18192] preit SUCCESS Aug 11 21:04:05 pns-xen07 SM: [18192] vdi_activate {'sr_uuid': 'ff12160f-ff09-40bb-a874-1366ad907f44', 'subtask_of': 'DummyRef:|dd797ae5-6eb4-2bef-0e64-62524ed3c5e1|VDI.activate', 'vdi_ref': 'OpaqueRef:3e615ac2-6f17-d482-5792-8a688d6683de', 'vdi_on_boot': 'persist', 'args': ['false'], 'vdi_location': '0850e980-6cb8-49bf-95d0-d777b1bf438f', 'host_ref': 'OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df', 'session_ref': 'OpaqueRef:4251c699-a33a-edb6-8e6c-4fd017286939', 'device_config': {'SRmaster': 'true'}, 'command': 'vdi_activate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:5a051e6e-4e23-beff-e3c7-7f7c926575fc', 'vdi_uuid': '0850e980-6cb8-49bf-95d0-d777b1bf438f'} Aug 11 21:04:05 pns-xen07 SM: [18192] lock: opening lock file /var/lock/sm/0850e980-6cb8-49bf-95d0-d777b1bf438f/vdi Aug 11 21:04:05 pns-xen07 SM: [18192] blktap2.activate Aug 11 21:04:05 pns-xen07 SM: [18192] lock: acquired /var/lock/sm/0850e980-6cb8-49bf-95d0-d777b1bf438f/vdi Aug 11 21:04:05 pns-xen07 SM: [18192] Adding tag to: 0850e980-6cb8-49bf-95d0-d777b1bf438f Aug 11 21:04:05 pns-xen07 SM: [18192] Activate lock succeeded Aug 11 21:04:05 pns-xen07 SM: [18192] ['ceph', 'df', '--format', 'json', '--name', 'client.admin'] Aug 11 21:04:05 pns-xen07 SM: [18192] preit SUCCESS Aug 11 21:04:05 pns-xen07 SM: [18192] RBDVDI.attach for 0850e980-6cb8-49bf-95d0-d777b1bf438f Aug 11 21:04:05 pns-xen07 SM: [18192] Calling _map_VHD Aug 11 21:04:05 pns-xen07 SM: [18192] Calling ceph_plugin Aug 11 21:04:05 pns-xen07 SM: [18192] Calling rbd/nbd map on host 
OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df Aug 11 21:04:05 pns-xen07 SM: [18299] ['rbd-nbd', '--nbds_max', '64', 'map', 'RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e@SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f', '--name', 'client.admin'] Aug 11 21:04:06 pns-xen07 SM: [18299] preit SUCCESS Aug 11 21:04:06 pns-xen07 SM: [18299] ['ln', '-s', '/dev/nbd3', '/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e@SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f'] Aug 11 21:04:06 pns-xen07 SM: [18299] preit SUCCESS Aug 11 21:04:06 pns-xen07 SM: [18192] PhyLink(/dev/sm/phy/ff12160f-ff09-40bb-a874-1366ad907f44/0850e980-6cb8-49bf-95d0-d777b1bf438f) -> /dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e@SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f Aug 11 21:04:06 pns-xen07 SM: [18192] ['/usr/sbin/tap-ctl', 'allocate'] Aug 11 21:04:06 pns-xen07 SM: [18192] = 0 Aug 11 21:04:06 pns-xen07 SM: [18192] ['/usr/sbin/tap-ctl', 'spawn'] Aug 11 21:04:06 pns-xen07 SM: [18192] = 0 Aug 11 21:04:06 pns-xen07 SM: [18192] ['/usr/sbin/tap-ctl', 'attach', '-p', '18354', '-m', '4'] Aug 11 21:04:06 pns-xen07 SM: [18192] = 0 Aug 11 21:04:06 pns-xen07 SM: [18192] ['/usr/sbin/tap-ctl', 'open', '-p', '18354', '-m', '4', '-a', 'aio:/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e@SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f', '-R'] Aug 11 21:04:06 pns-xen07 SM: [18192] = 0 Aug 11 21:04:06 pns-xen07 SM: [18192] tap.activate: Launched Tapdisk(aio:/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e@SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f, pid=18354, minor=4, state=R) Aug 11 21:04:06 pns-xen07 SM: [18192] Attempt to register tapdisk with RRDD as a plugin. 
Aug 11 21:04:06 pns-xen07 SM: [18192] ERROR: Failed to register tapdisk with RRDD due to UnixStreamHTTP instance has no attribute 'get response' Aug 11 21:04:06 pns-xen07 SM: [18192] DeviceNode(/dev/sm/backend/ff12160f-ff09-40bb-a874-1366ad907f44/0850e980-6cb8-49bf-95d0-d777b1bf438f) -> /dev/xen/blktap-2/tapdev4 Aug 11 21:04:06 pns-xen07 SM: [18192] lock: released /var/lock/sm/0850e980-6cb8-49bf-95d0-d777b1bf438f/vdi Aug 11 21:04:06 pns-xen07 SM: [18192] lock: closed /var/lock/sm/0850e980-6cb8-49bf-95d0-d777b1bf438f/vdi Aug 11 21:04:06 pns-xen07 SM: [18394] ['ceph', 'df', '--format', 'json', '--name', 'client.admin'] Aug 11 21:04:06 pns-xen07 SM: [18394] preit SUCCESS Aug 11 21:04:06 pns-xen07 SM: [18394] vdi_deactivate {'sr_uuid': 'ff12160f-ff09-40bb-a874-1366ad907f44', 'subtask_of': 'DummyRef:|f77d95b3-b1ff-148f-9ac5-19842d18777e|VDI.deactivate', 'vdi_ref': 'OpaqueRef:3e615ac2-6f17-d482-5792-8a688d6683de', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': '0850e980-6cb8-49bf-95d0-d777b1bf438f', 'host_ref': 'OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df', 'session_ref': 'OpaqueRef:40a50b63-2865-c3d3-913c-68a6202056f2', 'device_config': {'SRmaster': 'true'}, 'command': 'vdi_deactivate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:5a051e6e-4e23-beff-e3c7-7f7c926575fc', 'vdi_uuid': '0850e980-6cb8-49bf-95d0-d777b1bf438f'} Aug 11 21:04:06 pns-xen07 SM: [18394] lock: opening lock file /var/lock/sm/0850e980-6cb8-49bf-95d0-d777b1bf438f/vdi Aug 11 21:04:06 pns-xen07 SM: [18394] blktap2.deactivate Aug 11 21:04:06 pns-xen07 SM: [18394] lock: acquired /var/lock/sm/0850e980-6cb8-49bf-95d0-d777b1bf438f/vdi Aug 11 21:04:06 pns-xen07 SM: [18394] ['/usr/sbin/tap-ctl', 'close', '-p', '18354', '-m', '4'] Aug 11 21:04:06 pns-xen07 SM: [18394] = 0 Aug 11 21:04:06 pns-xen07 SM: [18394] Attempt to deregister tapdisk with RRDD. 
Aug 11 21:04:06 pns-xen07 SM: [18394] ERROR: Failed to deregister tapdisk with RRDD due to UnixStreamHTTP instance has no attribute 'get response' Aug 11 21:04:06 pns-xen07 SM: [18394] ['/usr/sbin/tap-ctl', 'detach', '-p', '18354', '-m', '4'] Aug 11 21:04:07 pns-xen07 SM: [18394] = 0 Aug 11 21:04:07 pns-xen07 SM: [18394] ['/usr/sbin/tap-ctl', 'free', '-m', '4'] Aug 11 21:04:07 pns-xen07 SM: [18394] = 0 Aug 11 21:04:07 pns-xen07 SM: [18394] tap.deactivate: Shut down Tapdisk(aio:/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e@SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f, pid=18354, minor=4, state=R) Aug 11 21:04:07 pns-xen07 SM: [18394] ['ceph', 'df', '--format', 'json', '--name', 'client.admin'] Aug 11 21:04:07 pns-xen07 SM: [18394] preit SUCCESS Aug 11 21:04:07 pns-xen07 SM: [18394] Calling _unmap_VHD Aug 11 21:04:07 pns-xen07 SM: [18394] Calling ceph_plugin Aug 11 21:04:07 pns-xen07 SM: [18394] Calling rbd/nbd map on host OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df Aug 11 21:04:07 pns-xen07 SM: [18491] ['realpath', '/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e@SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f'] Aug 11 21:04:07 pns-xen07 SM: [18491] preit SUCCESS Aug 11 21:04:07 pns-xen07 SM: [18491] ['unlink', '/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e@SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f'] Aug 11 21:04:07 pns-xen07 SM: [18491] preit SUCCESS Aug 11 21:04:07 pns-xen07 SM: [18491] ['rbd-nbd', 'unmap', '/dev/nbd3', '--name', 'client.admin'] Aug 11 21:04:07 pns-xen07 SM: [18491] preit SUCCESS Aug 11 21:04:07 pns-xen07 SM: [18394] Removed host key host_OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df for 0850e980-6cb8-49bf-95d0-d777b1bf438f Aug 11 21:04:07 pns-xen07 SM: [18394] lock: released /var/lock/sm/0850e980-6cb8-49bf-95d0-d777b1bf438f/vdi Aug 11 21:04:07 pns-xen07 SM: [18394] lock: closed /var/lock/sm/0850e980-6cb8-49bf-95d0-d777b1bf438f/vdi Aug 11 21:04:07 pns-xen07 SM: [18514] ['ceph', 'df', '--format', 'json', '--name', 'client.admin'] Aug 11 21:04:08 pns-xen07 SM: [18514] preit SUCCESS Aug 11 21:04:08 pns-xen07 SM: [18514] vdi_detach {'sr_uuid': 'ff12160f-ff09-40bb-a874-1366ad907f44', 'subtask_of': 'DummyRef:|04752e88-2ea6-e5f8-5efe-9cb745726834|VDI.detach', 'vdi_ref': 'OpaqueRef:3e615ac2-6f17-d482-5792-8a688d6683de', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': '0850e980-6cb8-49bf-95d0-d777b1bf438f', 'host_ref': 'OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df', 'session_ref': 'OpaqueRef:a08c365a-7170-8f91-0afc-7da4f305e87f', 'device_config': {'SRmaster': 'true'}, 'command': 'vdi_detach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:5a051e6e-4e23-beff-e3c7-7f7c926575fc', 'vdi_uuid': '0850e980-6cb8-49bf-95d0-d777b1bf438f'} Aug 11 21:04:08 pns-xen07 SM: [18514] lock: opening lock file /var/lock/sm/0850e980-6cb8-49bf-95d0-d777b1bf438f/vdi Aug 11 21:04:08 pns-xen07 SM: [18514] lock: closed /var/lock/sm/0850e980-6cb8-49bf-95d0-d777b1bf438f/vdi Aug 11 21:04:08 pns-xen07 SM: [18561] ['ceph', 'df', '--format', 'json', '--name', 'client.admin'] Aug 11 21:04:09 pns-xen07 SM: [18561] preit SUCCESS Aug 11 21:04:09 pns-xen07 SM: [18561] vdi_delete {'sr_uuid': 'ff12160f-ff09-40bb-a874-1366ad907f44', 'subtask_of': 'DummyRef:|6fbc88f9-e5c9-9925-30e9-a4cbf5f06fd9|VDI.destroy', 'vdi_ref': 'OpaqueRef:3e615ac2-6f17-d482-5792-8a688d6683de', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 
'0850e980-6cb8-49bf-95d0-d777b1bf438f', 'host_ref': 'OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df', 'session_ref': 'OpaqueRef:f0596daa-3a49-07ba-1dd7-44937f38643b', 'device_config': {'SRmaster': 'true'}, 'command': 'vdi_delete', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:5a051e6e-4e23-beff-e3c7-7f7c926575fc', 'vdi_uuid': '0850e980-6cb8-49bf-95d0-d777b1bf438f'} Aug 11 21:04:09 pns-xen07 SM: [18561] RBDVDI.delete for 0850e980-6cb8-49bf-95d0-d777b1bf438f Aug 11 21:04:09 pns-xen07 SM: [18561] Pause request for 2851fb05-59fd-4159-ad0b-9a3b6219047e Aug 11 21:04:09 pns-xen07 SM: [18561] Calling tap-pause on host OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df Aug 11 21:04:09 pns-xen07 SM: [18619] lock: opening lock file /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:09 pns-xen07 SM: [18619] lock: acquired /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:09 pns-xen07 SM: [18619] Pause for 2851fb05-59fd-4159-ad0b-9a3b6219047e Aug 11 21:04:09 pns-xen07 SM: [18619] Calling tap pause with minor 2 Aug 11 21:04:09 pns-xen07 SM: [18619] ['/usr/sbin/tap-ctl', 'pause', '-p', '7107', '-m', '2'] Aug 11 21:04:09 pns-xen07 SM: [18619] = 0 Aug 11 21:04:09 pns-xen07 SM: [18619] lock: released /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:09 pns-xen07 SM: [18619] lock: closed /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:09 pns-xen07 SM: [18561] Calling _unmap_VHD Aug 11 21:04:09 pns-xen07 SM: [18561] Calling ceph_plugin Aug 11 21:04:09 pns-xen07 SM: [18561] Calling rbd/nbd map on host OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df Aug 11 21:04:09 pns-xen07 SM: [18627] ['realpath', '/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e'] Aug 11 21:04:09 pns-xen07 SM: [18627] preit SUCCESS Aug 11 21:04:09 pns-xen07 SM: [18627] ['unlink', '/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e'] Aug 11 21:04:09 pns-xen07 SM: [18627] preit SUCCESS Aug 11 21:04:09 pns-xen07 SM: [18627] ['rbd-nbd', 'unmap', '/dev/nbd2', '--name', 'client.admin'] Aug 11 21:04:09 pns-xen07 SM: [18627] preit SUCCESS Aug 11 21:04:09 pns-xen07 SM: [18561] ['rbd', 'snap', 'unprotect', 'VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e@SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f', '--pool', 'RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44', '--name', 'client.admin'] Aug 11 21:04:09 pns-xen07 SM: [18561] preit SUCCESS Aug 11 21:04:09 pns-xen07 SM: [18561] ['rbd', 'snap', 'rm', 'VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e@SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f', '--pool', 'RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44', '--name', 'client.admin'] Aug 11 21:04:10 pns-xen07 SM: [18561] preit SUCCESS Aug 11 21:04:10 pns-xen07 SM: [18561] ['rbd', 'image-meta', 'remove', 'VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e', 'SNAP-0850e980-6cb8-49bf-95d0-d777b1bf438f', '--pool', 'RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44', '--name', 'client.admin'] Aug 11 21:04:11 pns-xen07 SM: [18561] preit SUCCESS Aug 11 21:04:11 pns-xen07 SM: [18561] Calling _map_VHD Aug 11 21:04:11 pns-xen07 SM: [18561] Calling ceph_plugin Aug 11 21:04:11 pns-xen07 SM: [18561] Calling rbd/nbd map on host OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df Aug 11 21:04:11 pns-xen07 SM: [18809] ['rbd-nbd', '--nbds_max', '64', 'map', 'RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e', '--name', 'client.admin'] Aug 11 21:04:11 pns-xen07 SM: [18809] preit SUCCESS Aug 
11 21:04:11 pns-xen07 SM: [18809] ['ln', '-s', '/dev/nbd2', '/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e'] Aug 11 21:04:11 pns-xen07 SM: [18809] preit SUCCESS Aug 11 21:04:11 pns-xen07 SM: [18561] Unpause request for 2851fb05-59fd-4159-ad0b-9a3b6219047e secondary=None Aug 11 21:04:11 pns-xen07 SM: [18561] Calling tap-unpause on host OpaqueRef:0e9c18cb-c243-2d9e-b4db-7bb854e066df Aug 11 21:04:11 pns-xen07 SM: [18851] lock: opening lock file /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:11 pns-xen07 SM: [18851] lock: acquired /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:11 pns-xen07 SM: [18851] Unpause for 2851fb05-59fd-4159-ad0b-9a3b6219047e Aug 11 21:04:11 pns-xen07 SM: [18851] Realpath: /dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e Aug 11 21:04:11 pns-xen07 SM: [18851] Calling tap unpause with minor 2 Aug 11 21:04:11 pns-xen07 SM: [18851] ['/usr/sbin/tap-ctl', 'unpause', '-p', '7107', '-m', '2', '-a', 'aio:/dev/nbd/RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44/VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e'] Aug 11 21:04:11 pns-xen07 SM: [18851] = 0 Aug 11 21:04:11 pns-xen07 SM: [18851] lock: released /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi Aug 11 21:04:11 pns-xen07 SM: [18851] lock: closed /var/lock/sm/2851fb05-59fd-4159-ad0b-9a3b6219047e/vdi

Placing the VM on local storage makes it work, so the failure only happens with VDIs on RBDSR storage. As mentioned in #11, XenOrchestra's Backup works now. In DeltaBackup mode, however, XO makes a full backup on the first run and incremental backups on later runs by leaving one snapshot intact. Furthermore, DeltaBackup uses the HTTP XenAPI to export the VDI, which is a lot faster than exporting via xe, which Backup uses. So DeltaBackup has a lot of advantages over Backup, and I would love to see it working with RBDSR volumes.
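Conceptually, each DeltaBackup run boils down to something like the following sketch (placeholder helper functions passed in as arguments, not XO's actual code):

def delta_backup_cycle(vdi_uuid, previous_snapshot_uuid,
                       snapshot_vdi, export_vdi, delete_snapshot):
    """Take a new snapshot, export full or incremental data, keep one snapshot."""
    new_snapshot_uuid = snapshot_vdi(vdi_uuid)           # VDI.snapshot via XAPI

    if previous_snapshot_uuid is None:
        # First run: export the whole snapshot (full backup).
        export_vdi(new_snapshot_uuid, base=None)
    else:
        # Later runs: export only the blocks changed since the retained snapshot.
        export_vdi(new_snapshot_uuid, base=previous_snapshot_uuid)
        delete_snapshot(previous_snapshot_uuid)          # only the newest snapshot is kept

    return new_snapshot_uuid                             # retained as the base for the next run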

mhoffmann75 commented 8 years ago

I was able to narrow it down a little further, and it seems to me that HTTP export via XAPI is broken with RBDSR:

In the Xen Orchestra logs (syslog on my machine) it looks like XAPI produces an I/O error:

Aug 16 11:42:25 xo-test xo-server[12387]: [Warn] Possibly unhandled rejection: XapiError: VDI_IO_ERROR(Device I/O errors) Aug 16 11:42:25 xo-test xo-server[12387]: at wrapError (/opt/xo-server/node_modules/xen-api/src/index.js:97:34) Aug 16 11:42:25 xo-test xo-server[12387]: at /source/xapi/index.js:168:51 Aug 16 11:42:25 xo-test xo-server[12387]: at /opt/xo-server/node_modules/lodash/_createBaseFor.js:17:11 Aug 16 11:42:25 xo-test xo-server[12387]: at baseForOwn (/opt/xo-server/node_modules/lodash/_baseForOwn.js:13:20) Aug 16 11:42:25 xo-test xo-server[12387]: at /opt/xo-server/node_modules/lodash/_createBaseEach.js:17:14 Aug 16 11:42:25 xo-test xo-server[12387]: at forEach (/opt/xo-server/node_modules/lodash/forEach.js:38:10) Aug 16 11:42:25 xo-test xo-server[12387]: at Collection.onAddOrUpdate (/source/xapi/index.js:140:14) Aug 16 11:42:25 xo-test xo-server[12387]: at Collection.patchedError as emit Aug 16 11:42:25 xo-test xo-server[12387]: at /opt/xo-server/node_modules/xo-collection/src/collection.js:280:15 Aug 16 11:42:25 xo-test xo-server[12387]: at /opt/xo-server/node_modules/lodash._baseeach/index.js:110:11 Aug 16 11:42:25 xo-test xo-server[12387]: at baseForOwn (/opt/xo-server/node_modules/lodash._baseeach/index.js:52:10) Aug 16 11:42:25 xo-test xo-server[12387]: at /opt/xo-server/node_modules/lodash._baseeach/index.js:80:14 Aug 16 11:42:25 xo-test xo-server[12387]: at /opt/xo-server/node_modules/lodash.foreach/index.js:26:9 Aug 16 11:42:25 xo-test xo-server[12387]: at /opt/xo-server/node_modules/xo-collection/src/collection.js:278:14 Aug 16 11:42:25 xo-test xo-server[12387]: at _combinedTickCallback (internal/process/next_tick.js:67:7) Aug 16 11:42:25 xo-test xo-server[12387]: at process._tickCallback (internal/process/next_tick.js:98:9) Aug 16 11:42:25 xo-test xo-server[12387]: Tue, 16 Aug 2016 09:42:25 GMT xo:perf blocked for 75ms Aug 16 11:42:25 xo-test xo-server[12387]: Rejected backup: XapiError: VDI_IO_ERROR(Device I/O errors) Aug 16 11:42:25 xo-test xo-server[12387]: Tue, 16 Aug 2016 09:42:25 GMT xen-api root@192.168.250.148: task.destroy(...) [46ms] ==> string Aug 16 11:42:25 xo-test xo-server[12387]: Tue, 16 Aug 2016 09:42:25 GMT xo:xapi task destroyed: VDI Export (ubuntu1) Aug 16 11:42:25 xo-test xo-server[12387]: Tue, 16 Aug 2016 09:42:25 GMT xo:api admin@admin.net | vm.rollingDeltaBackup(...) [9s] =!> Error: Rolling delta vm backup failed. Aug 16 11:42:25 xo-test xo-server[12387]: Tue, 16 Aug 2016 09:42:25 GMT xo:xapi Deleting VM XO_DELTA_BASE_VM_SNAPSHOT_DeltaBackup Aug 16 11:42:25 xo-test xo-server[12387]: Tue, 16 Aug 2016 09:42:25 GMT xo:xapi Deleting VDI ubuntu1 Aug 16 11:42:25 xo-test xo-server[12387]: Tue, 16 Aug 2016 09:42:25 GMT xo:api admin@admin.net | job.runSequence(...) [9s] ==> boolean Aug 16 11:42:25 xo-test xo-server[12387]: [Warn] Possibly unhandled rejection: Error: Rolling delta vm backup failed. 
Aug 16 11:42:25 xo-test xo-server[12387]: at _default._callee25$ (/source/xo-mixins/backups.js:539:12) Aug 16 11:42:25 xo-test xo-server[12387]: at tryCatch (/opt/xo-server/node_modules/regenerator-runtime/runtime.js:62:40) Aug 16 11:42:25 xo-test xo-server[12387]: at GeneratorFunctionPrototype.invoke as _invoke Aug 16 11:42:25 xo-test xo-server[12387]: at GeneratorFunctionPrototype.prototype.(anonymous function) as next Aug 16 11:42:25 xo-test xo-server[12387]: at step (/opt/xo-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:17:30) Aug 16 11:42:25 xo-test xo-server[12387]: at /opt/xo-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:28:20 Aug 16 11:42:25 xo-test xo-server[12387]: at run (/opt/xo-server/node_modules/babel-runtime/node_modules/core-js/library/modules/es6.promise.js:87:22) Aug 16 11:42:25 xo-test xo-server[12387]: at /opt/xo-server/node_modules/babel-runtime/node_modules/core-js/library/modules/es6.promise.js:100:28 Aug 16 11:42:25 xo-test xo-server[12387]: at flush (/opt/xo-server/node_modules/babel-runtime/node_modules/core-js/library/modules/_microtask.js:18:9) Aug 16 11:42:25 xo-test xo-server[12387]: at _combinedTickCallback (internal/process/next_tick.js:67:7) Aug 16 11:42:25 xo-test xo-server[12387]: at process._tickCallback (internal/process/next_tick.js:98:9)

Unfortunately no further details.

As far as I understand, XO DeltaBackup makes use of XAPI's incremental export: https://wiki.xen.org/wiki/Disk_import/export_APIs

So basically an HTTP GET from XenServer should export delta backups: http://IP_of_XenServer/export_raw_vdi?vdi=xxx[&base=yyy]
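For reference, the same request can be scripted; a minimal sketch assuming the stock XenAPI Python bindings (the host address, credentials and UUIDs below are placeholders):

# Request a raw VDI export over the XAPI HTTP interface.
import shutil
import urllib.request

import XenAPI

HOST = "192.168.0.10"          # placeholder XenServer/XCP-ng host address
VDI_UUID = "xxx"               # placeholder: UUID of the VDI or snapshot to export
BASE_UUID = None               # placeholder: set to an older snapshot UUID for a delta export

session = XenAPI.Session("http://%s" % HOST)
session.xenapi.login_with_password("root", "password")
try:
    sid = session._session     # opaque session reference kept by the XenAPI binding after login
    url = "http://%s/export_raw_vdi?session_id=%s&vdi=%s" % (HOST, sid, VDI_UUID)
    if BASE_UUID:
        url += "&base=%s" % BASE_UUID   # incremental export, per the wiki page above
    # A 500 response here (urllib raises HTTPError) corresponds to the VDI_IO_ERROR below.
    with urllib.request.urlopen(url) as resp, open("vdi-export.raw", "wb") as out:
        shutil.copyfileobj(resp, out)
finally:
    session.xenapi.logout()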

This seems to work for local storage, as far as I was able to test via a browser. But with RBDSR storage it fails. Even exporting the entire disk with vdi=uuid_of_vdi (without base=yyy) fails - it only exports a file with this content:

HTTP/1.1 500 Internal Error
content-length: 266
content-type: text/html
connection: close
cache-control: no-cache, no-store

HTTP 500 internal server error

An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.

Additional information

VDI_IO_ERROR: [ Device I/O errors ]

I guess that's the same I/O error that the XO backup logs above show.

Please note: No errors in SMlog.

@rposudnevskiy Any idea what's going on here and how to fix it?

rposudnevskiy commented 8 years ago

Hi Martin,

The problem is that vhd-tool (/usr/bin/vhd-tool), which is used by xe vdi-export, doesn't support RBD images. It is intended to work with images in VHD format, but RBDSR uses raw images. The capability to export only the changes between a VDI and a snapshot, which DeltaBackup relies on, works only with the VHD format. If you try to export only the deltas you will find messages like "base VDI is not a vhd; cannot compute differences" in /var/log/xensource.log. As for exporting the entire disk, vhd-tool uses the raw format in that case but hits an error too:

Aug 17 11:06:55 HV-VMH0154AE xapi: [ info|HV-VMH0154AE|191666 INET :::80|VDI export R:03671ba94b88|vhd_tool_wrapper] Executing /bin/vhd-tool stream --source-protocol none --source-format raw --source /dev/sm/backend/5aab7115-2d2c-466d-818c-909cff689467/bff7db0c-076f-4386-ad48-fd9a2eb4d87d --destination-protocol none --destination-format raw --destination-fd d909a77e-428a-4940-a9e2-5f55396e0b98 --tar-filename-prefix  --progress --machine --direct --path /dev/mapper:.
Aug 17 11:06:55 HV-VMH0154AE xapi: [error|HV-VMH0154AE|191666 INET :::80|VDI export R:03671ba94b88|vhd_tool_wrapper] vhd-tool failed, returning VDI_IO_ERROR
Aug 17 11:06:55 HV-VMH0154AE xapi: [error|HV-VMH0154AE|191666 INET :::80|VDI export R:03671ba94b88|vhd_tool_wrapper] vhd-tool output: vhd-tool: internal error, uncaught exception:#012          Unix.Unix_error(Unix.EINVAL, "lseek", "")

I don't know the reason for this error yet. It should work, since it uses the raw format, but it doesn't.
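One way to start narrowing it down would be a quick lseek check on the backend device; a diagnostic sketch only (the device path is copied from the vhd-tool command line above, and it is not certain that the failing lseek happens on the source device rather than on the destination fd):

# Verify basic lseek behaviour on the exported backend device (run in dom0).
import os

DEV = "/dev/sm/backend/5aab7115-2d2c-466d-818c-909cff689467/bff7db0c-076f-4386-ad48-fd9a2eb4d87d"

fd = os.open(DEV, os.O_RDONLY)
try:
    # Compare with the Unix.EINVAL lseek error reported by vhd-tool.
    size = os.lseek(fd, 0, os.SEEK_END)
    print("SEEK_END ok, device size: %d bytes" % size)
    os.lseek(fd, 0, os.SEEK_SET)
    print("first 16 bytes:", os.read(fd, 16).hex())
finally:
    os.close(fd)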

As a workaround you can try the native Ceph export commands, rbd export and rbd export-diff (see http://ceph.com/dev-notes/incremental-snapshots-with-rbd/), and write some scripts to run them from cron.
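A cron-driven script along these lines could do it; a rough sketch only, with pool/image names copied from the logs above as placeholders:

# Full export on the first run, then incremental rbd export-diff against the previous snapshot.
import subprocess
import time

POOL = "RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44"   # placeholder SR pool
IMAGE = "VHD-2851fb05-59fd-4159-ad0b-9a3b6219047e"             # placeholder VDI image
BACKUP_DIR = "/mnt/backup"                                     # placeholder destination

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

snap = "backup-%s" % time.strftime("%Y%m%dT%H%M%S")
run("rbd", "snap", "create", "%s/%s@%s" % (POOL, IMAGE, snap))

prev = None  # read the previously retained snapshot name from your own state file
if prev is None:
    # First run: full export of the snapshot.
    run("rbd", "export", "%s/%s@%s" % (POOL, IMAGE, snap),
        "%s/%s-%s.full" % (BACKUP_DIR, IMAGE, snap))
else:
    # Later runs: only the blocks changed since the previous snapshot.
    run("rbd", "export-diff", "--from-snap", prev,
        "%s/%s@%s" % (POOL, IMAGE, snap),
        "%s/%s-%s.diff" % (BACKUP_DIR, IMAGE, snap))
    run("rbd", "snap", "rm", "%s/%s@%s" % (POOL, IMAGE, prev))
# Persist `snap` somewhere so the next run can pass it as --from-snap.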

rposudnevskiy commented 8 years ago

Hi Martin, I have made some fixes for issue #14. Actually it's more of a hack than a real fix, but I hope it helps you with XO. Please see Add_support_for_xe_vdi-export for details. Please note that exported images can be imported only into an RBD repository, as they are exported in rbd diff format.
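For reference, the rbd diff format itself can be applied back to an image with the stock Ceph tooling; a rough sketch with placeholder paths and names (this is not the plugin's own import path):

# Applying rbd-format exports back to a Ceph image.
import subprocess

POOL = "RBD_XenStorage-ff12160f-ff09-40bb-a874-1366ad907f44"   # placeholder target pool
IMAGE = "VHD-restored"                                         # placeholder target image

# A full export produced by `rbd export` is imported as a new image:
subprocess.check_call(["rbd", "import", "/mnt/backup/full.raw", "%s/%s" % (POOL, IMAGE)])

# Incremental exports produced by `rbd export-diff` are applied on top, in order:
subprocess.check_call(["rbd", "import-diff", "/mnt/backup/increment-1.diff", "%s/%s" % (POOL, IMAGE)])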

rposudnevskiy commented 7 years ago

With update 982fbf2, XenOrchestra DeltaBackup should be fully supported.