vacosta94 / VirtBKP

Tool to create oVirt VM' disk backups
http://blog.infratic.com
GNU General Public License v3.0
24 stars · 10 forks

Error when in creating process of qcow2 image #12

Closed argonzalezm closed 5 years ago

argonzalezm commented 5 years ago

Hello! Thanks in advance for your project. I've followed the instructions, but when I try to create a backup of a VM from my bkpvm, I always hit the same problem, with every VM in my domain. The error looks like this:

[root@bkpvm VirtBKP]# python backup_vm.py default.conf vmwin10
[INFO] Trying to create snapshot of VM: fa6f11d2-0f2f-46d4-af40-fd4d4b8c51a3
[INFO] Waiting until snapshot creation ends
[INFO] Waiting until snapshot creation ends
[OK] Snapshot created
[INFO] Trying to create a qcow2 file of disk 600783ee-a8a8-425b-a94c-6e528d1a42c7
[INFO] Attach snap disk to bkpvm
[INFO] Identifying disk device, this might take a while
[INFO] Disk found using serial number
[INFO] Creating qcow2 file: /backup_vm/vmwin10/2019-05-07-14/vmwin10_Disk1.qcow2
qemu-img: Could not open '/backup_vm/vmwin10/2019-05-07-14/vmwin10_Disk1.qcow2': Failed to get shared "write" lock
Is another process using the image?
ERROR to get process status
qemu-img: error while reading sector 245760: Input/output error
qemu-img: error while reading sector 241664: Input/output error
qemu-img: error while reading sector 266240: Input/output error
qemu-img: error while reading sector 249856: Input/output error
qemu-img: error while reading sector 258048: Input/output error
qemu-img: error while reading sector 237568: Input/output error
qemu-img: error while reading sector 253952: Input/output error
qemu-img: error while reading sector 262144: Input/output error

[ERROR] qcow2 file creation failed
[INFO] Dettach snap disk of bkpvm
[INFO] Trying to delete snapshot of vmwin10
[INFO] Waiting until snapshot deletion ends
[INFO] Waiting until snapshot deletion ends
[OK] Snapshot deleted

The bkpvm has a 500 GB virtio disk, and the VMs to back up are around 50 GB each.

Thank you in advance. Regards, Ángel.

argonzalezm commented 5 years ago

Hello again, I've seen this in dmesg:

[ 1201.259117] vdb: vdb1 vdb2 vdb3
[ 1206.369495] buffer_io_error: 2 callbacks suppressed
[ 1206.369502] Buffer I/O error on dev vdb, logical block 30944, async page read
[ 1206.370395] Buffer I/O error on dev vdb, logical block 30944, async page read
[ 1206.383439] Buffer I/O error on dev vdb, logical block 30336, async page read
[ 1206.384529] Buffer I/O error on dev vdb, logical block 30336, async page read
[ 1206.431777] Buffer I/O error on dev vdb, logical block 33344, async page read
[ 1206.432818] Buffer I/O error on dev vdb, logical block 33344, async page read
[ 1206.449302] Buffer I/O error on dev vdb, logical block 29952, async page read
[ 1206.450083] Buffer I/O error on dev vdb, logical block 31296, async page read
[ 1206.450092] Buffer I/O error on dev vdb, logical block 31296, async page read
[ 1206.451328] Buffer I/O error on dev vdb, logical block 32320, async page read

But the bkpvm doesn't have a second virtual disk. Thanks. Á.

celticguy42 commented 5 years ago

We are encountering a similar error backing up oVirt VMs:

The qcow2 file is created, but it appears to be empty.

I've included the output from attempts to back up 2 VMs, plus the log entries from /var/log/messages.


./backup_vm.py ./default.conf vm2backup-01

[OK] Connection to oVIrt API success https://ovirt.company.com/ovirt-engine/api
[INFO] Trying to create snapshot of VM: fb3865b2-4bc8-4cc8-822d-bf70d56d6ebe
[INFO] Waiting until snapshot creation ends
[OK] Snapshot created
[INFO] Trying to create a qcow2 file of disk 712bb30b-fe7e-4dc7-82ec-857064fae747
[INFO] Attach snap disk to bkpvm
[INFO] Identifying disk device, this might take a while
[INFO] Disk found using serial number
[INFO] Creating qcow2 file: /ovirtbackups//vm2backup-01/2019-07-17-07/vm2backups-01_Disk1.qcow2
qemu-img: Could not open '/ovirtbackups//vm2backup-01/2019-07-17-07/vm2backup-01_Disk1.qcow2': Failed to get shared "write" lock
Is another process using the image?
ERROR to get process status
qemu-img: error while reading sector 9338878: Input/output error
qemu-img: error while reading sector 9342974: Input/output error
qemu-img: error while reading sector 9347070: Input/output error
qemu-img: error while reading sector 9351166: Input/output error
qemu-img: error while reading sector 9355262: Input/output error
qemu-img: error while reading sector 9326590: Input/output error

[ERROR] qcow2 file creation failed
[INFO] Dettach snap disk of bkpvm
[INFO] Trying to delete snapshot of vm2backup-01
[INFO] Waiting until snapshot deletion ends
[INFO] Waiting until snapshot deletion ends
[OK] Snapshot deleted

./backup_vm.py ./default.conf 2ndVM2Backup-01

[OK] Connection to oVIrt API success https://ovirt.company.com/ovirt-engine/api
[INFO] Trying to create snapshot of VM: 70188c5f-587a-46b4-b5ea-0adf6d7cfc75
[INFO] Waiting until snapshot creation ends
[INFO] Waiting until snapshot creation ends
[OK] Snapshot created
[INFO] Trying to create a qcow2 file of disk 49af141f-9366-4918-baec-e2a62b6b5c68
[INFO] Attach snap disk to bkpvm
[INFO] Identifying disk device, this might take a while
[INFO] Disk found using serial number
[INFO] Creating qcow2 file: /ovirtbackups//2ndVM2Backup-01/2019-07-17-07/2ndVM2Backup-01_Disk1.qcow2
qemu-img: Could not open '/ovirtbackups//2ndVM2Backup-01/2019-07-17-07/2ndVM2Backup-01_Disk1.qcow2': Failed to get shared "write" lock
Is another process using the image?
ERROR to get process status
qemu-img: error while reading sector 3231744: Input/output error

[ERROR] qcow2 file creation failed
[INFO] Dettach snap disk of bkpvm
[INFO] Trying to delete snapshot of 2ndVM2Backup-01

[INFO] Waiting until snapshot deletion ends
[INFO] Waiting until snapshot deletion ends
[OK] Snapshot deleted


Jul 17 07:46:29 vmbackup kernel: pci 0000:00:08.0: BAR 4: assigned [mem 0x80000000-0x80003fff 64bit pref]
Jul 17 07:46:29 vmbackup kernel: pci 0000:00:08.0: BAR 1: assigned [mem 0x80004000-0x80004fff]
Jul 17 07:46:29 vmbackup kernel: pci 0000:00:08.0: BAR 0: assigned [io 0x1000-0x103f]
Jul 17 07:46:29 vmbackup kernel: virtio-pci 0000:00:08.0: enabling device (0000 -> 0003)
Jul 17 07:46:30 vmbackup kernel: virtio_blk virtio5: [vdb] 83886080 512-byte logical blocks (42.9 GB/40.0 GiB)
Jul 17 07:46:30 vmbackup kernel: vdb: vdb4#012 vdb4:
Jul 17 07:46:35 vmbackup kernel: Buffer I/O error on dev vdb, logical block 1167551, async page read
Jul 17 07:46:35 vmbackup kernel: Buffer I/O error on dev vdb, logical block 1167551, async page read
Jul 17 07:46:35 vmbackup kernel: Buffer I/O error on dev vdb, logical block 1168063, async page read
Jul 17 07:46:35 vmbackup kernel: Buffer I/O error on dev vdb, logical block 1168063, async page read
Jul 17 07:46:35 vmbackup kernel: Buffer I/O error on dev vdb, logical block 1168447, async page read
Jul 17 07:46:35 vmbackup kernel: Buffer I/O error on dev vdb, logical block 1168447, async page read
Jul 17 07:46:35 vmbackup kernel: Buffer I/O error on dev vdb, logical block 1168959, async page read
Jul 17 07:46:35 vmbackup kernel: Buffer I/O error on dev vdb, logical block 1168959, async page read
Jul 17 07:46:35 vmbackup kernel: Buffer I/O error on dev vdb, logical block 1169471, async page read
Jul 17 07:46:35 vmbackup kernel: Buffer I/O error on dev vdb, logical block 1169471, async page read
Jul 17 07:54:04 vmbackup kernel: pci 0000:00:08.0: BAR 4: assigned [mem 0x80000000-0x80003fff 64bit pref]
Jul 17 07:54:04 vmbackup kernel: pci 0000:00:08.0: BAR 1: assigned [mem 0x80004000-0x80004fff]
Jul 17 07:54:04 vmbackup kernel: pci 0000:00:08.0: BAR 0: assigned [io 0x1000-0x103f]
Jul 17 07:54:04 vmbackup kernel: virtio-pci 0000:00:08.0: enabling device (0000 -> 0003)
Jul 17 07:54:04 vmbackup kernel: virtio_blk virtio5: [vdb] 41943040 512-byte logical blocks (21.4 GB/20.0 GiB)
Jul 17 07:54:04 vmbackup kernel: vdb: vdb1 vdb2
Jul 17 07:54:04 vmbackup systemd: Starting LVM2 PV scan on device 252:18...
Jul 17 07:54:04 vmbackup lvm: 2 logical volume(s) in volume group "centos_2ndVM2Backup-01" now active
Jul 17 07:54:04 vmbackup systemd: Started LVM2 PV scan on device 252:18.
Jul 17 07:54:09 vmbackup systemd: Stopping LVM2 PV scan on device 252:18...
Jul 17 07:54:09 vmbackup lvm: Device 252:18 not found. Cleared from lvmetad cache.
Jul 17 07:54:09 vmbackup systemd: Stopped LVM2 PV scan on device 252:18.
Jul 17 07:54:10 vmbackup kernel: buffer_io_error: 2 callbacks suppressed
Jul 17 07:54:10 vmbackup kernel: Buffer I/O error on dev vdb, logical block 403968, async page read

celticguy42 commented 5 years ago

I tried the following, in case this helps:

I edited backup_vm_v1.6.py and commented out the lines that deactivate, detach, and delete the snapshot so I could run the qemu-img command manually. Running the command manually seems to work.

Would that perhaps indicate an issue with how qemu-img is being called using subprocess.call?
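One quick way to check that theory is to mirror the call in isolation. The sketch below is hypothetical (the paths and variable names are placeholders, not VirtBKP's actual code) and contrasts the two common ways qemu-img convert can be launched from Python: a single string passed to a shell versus an argument list passed directly to subprocess.call, which skips shell re-parsing entirely.

```python
import shlex

# Placeholder inputs: the snapshot disk attached to the bkpvm and the
# destination qcow2 path (both illustrative, not from VirtBKP's code).
src_dev = "/dev/vdb"
dest = "/ovirtbackups/vm_Disk1.qcow2"

# Form 1: one shell string. With subprocess.call(shell_cmd, shell=True)
# the shell re-parses this, so a quoting mistake or stray token can
# silently change what qemu-img receives.
shell_cmd = "qemu-img convert -O qcow2 %s %s" % (src_dev, dest)

# Form 2: an argument list. subprocess.call(argv) runs this with no
# shell involved, which is safer for unusual paths.
argv = ["qemu-img", "convert", "-O", "qcow2", src_dev, dest]

# For well-behaved paths the two forms should describe the same command:
# shlex.split(shell_cmd) == argv
```

Running both forms by hand and comparing them against what the script executes would show whether the subprocess invocation itself is at fault.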

vacosta94 commented 5 years ago

Can you please share the following information:

BNKT0P commented 5 years ago

Hi, the same issue:

Error:

[INFO] Creating qcow2 file: /backup1//centos7-test1/2019-07-22-14/centos7-test1_SystemDisk.qcow2
qemu-img: Could not open '/backup1//centos7-test1/2019-07-22-14/centos7-test1_SystemDisk.qcow2': Failed to get shared "write" lock
Is another process using the image?
ERROR to get process status
qemu-img: error while reading sector 7036927: Input/output error
[ERROR] qcow2 file creation failed

messages log:

Jul 22 14:46:41 backup-vm systemd: Started Session 1 of user root.
Jul 22 14:52:40 backup-vm kernel: pci 0000:00:08.0: BAR 4: assigned [mem 0x5c0000000-0x5c0003fff 64bit pref]
Jul 22 14:52:40 backup-vm kernel: pci 0000:00:08.0: BAR 1: assigned [mem 0x40000000-0x40000fff]
Jul 22 14:52:40 backup-vm kernel: pci 0000:00:08.0: BAR 0: assigned [io 0x1000-0x103f]
Jul 22 14:52:40 backup-vm kernel: virtio-pci 0000:00:08.0: enabling device (0000 -> 0003)
Jul 22 14:52:40 backup-vm kernel: virtio_blk virtio5: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jul 22 14:52:40 backup-vm kernel: vda: vda1 vda2
Jul 22 14:52:40 backup-vm systemd: Starting LVM2 PV scan on device 252:2...
Jul 22 14:52:40 backup-vm lvm: Request to update PV wgHNi0-A35X-MDaf-qvSf-4GYl-rAYg-4Vn9H3 in lvmetad gave response failed. Reason: Ignore duplicate PV
Jul 22 14:52:40 backup-vm lvm: WARNING: To avoid corruption, restart lvmetad (or disable with use_lvmetad=0).
Jul 22 14:52:40 backup-vm systemd: lvm2-pvscan@252:2.service: main process exited, code=exited, status=5/NOTINSTALLED
Jul 22 14:52:40 backup-vm systemd: Failed to start LVM2 PV scan on device 252:2.
Jul 22 14:52:40 backup-vm systemd: Unit lvm2-pvscan@252:2.service entered failed state.
Jul 22 14:52:40 backup-vm systemd: lvm2-pvscan@252:2.service failed.
Jul 22 14:52:48 backup-vm kernel: Buffer I/O error on dev vda, logical block 880127, async page read
Jul 22 14:52:48 backup-vm kernel: Buffer I/O error on dev vda, logical block 880127, async page read

BNKT0P commented 5 years ago

Renamed the VG on the target VM; same error:

Jul 22 16:46:53 bkpvm kernel: pci 0000:00:08.0: BAR 4: assigned [mem 0x5c0000000-0x5c0003fff 64bit pref]
Jul 22 16:46:53 bkpvm kernel: pci 0000:00:08.0: BAR 1: assigned [mem 0x40000000-0x40000fff]
Jul 22 16:46:53 bkpvm kernel: pci 0000:00:08.0: BAR 0: assigned [io 0x1000-0x103f]
Jul 22 16:46:53 bkpvm kernel: virtio-pci 0000:00:08.0: enabling device (0000 -> 0003)
Jul 22 16:46:53 bkpvm kernel: virtio_blk virtio5: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jul 22 16:46:53 bkpvm kernel: vda: vda1 vda2
Jul 22 16:46:53 bkpvm systemd: Starting LVM2 PV scan on device 252:2...
Jul 22 16:46:53 bkpvm lvm: WARNING: Not using lvmetad because duplicate PVs were found.
Jul 22 16:46:53 bkpvm lvm: WARNING: Autoactivation reading from disk instead of lvmetad.
Jul 22 16:46:53 bkpvm lvm: 1 logical volume(s) in volume group "vg_backup1" now active
Jul 22 16:46:53 bkpvm lvm: 2 logical volume(s) in volume group "centos_centos7-test1" now active
Jul 22 16:46:53 bkpvm lvm: 2 logical volume(s) in volume group "centos" now active
Jul 22 16:46:53 bkpvm systemd: Started LVM2 PV scan on device 252:2.
Jul 22 16:46:58 bkpvm systemd: Stopping LVM2 PV scan on device 252:2...
Jul 22 16:46:58 bkpvm lvm: WARNING: Not using lvmetad because duplicate PVs were found.
Jul 22 16:46:58 bkpvm systemd: Stopped LVM2 PV scan on device 252:2.
Jul 22 16:46:58 bkpvm kernel: buffer_io_error: 6 callbacks suppressed
Jul 22 16:46:58 bkpvm kernel: Buffer I/O error on dev vda, logical block 281594, async page read
Jul 22 16:46:58 bkpvm kernel: Buffer I/O error on dev vda, logical block 281594, async page read
Jul 22 16:46:58 bkpvm kernel: Buffer I/O error on dev vda, logical block 279168, async page read
Jul 22 16:46:58 bkpvm kernel: Buffer I/O error on dev vda, logical block 279168, async page read
Jul 22 16:46:58 bkpvm kernel: Buffer I/O error on dev vda, logical block 278752, async page read
Jul 22 16:46:58 bkpvm kernel: Buffer I/O error on dev vda, logical block 278752, async page read
Jul 22 16:46:58 bkpvm kernel: Buffer I/O error on dev vda, logical block 280704, async page read
Jul 22 16:46:58 bkpvm kernel: Buffer I/O error on dev vda, logical block 280704, async page read
Jul 22 16:46:58 bkpvm kernel: Buffer I/O error on dev vda, logical block 279872, async page read
Jul 22 16:46:58 bkpvm kernel: Buffer I/O error on dev vda, logical block 279872, async page read

BjarnBachmanVC commented 5 years ago

This is due to a change in the qemu-img tool. On line 9 of virt_pkg_utls.py, add the -U flag, changing

cmd="qemu-img info "+ qcowfile + "|grep 'virtual size'|awk '{print $4}'|sed 's/(//g'"

to

cmd="qemu-img info -U "+ qcowfile + "|grep 'virtual size'|awk '{print $4}'|sed 's/(//g'"

and it is back to normal :-)
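In context, the patched call could be wrapped like this. A minimal sketch in which only the cmd string comes from virt_pkg_utls.py; the function names and structure are illustrative, not the repository's actual code:

```python
import subprocess

def info_cmd(qcowfile):
    # Line 9 with -U (--force-share) added: qemu-img info can then read
    # the image metadata even while another qemu process holds the
    # write lock, avoiding 'Failed to get shared "write" lock'.
    return ("qemu-img info -U " + qcowfile +
            "|grep 'virtual size'|awk '{print $4}'|sed 's/(//g'")

def virtual_size(qcowfile):
    # The pipeline is a shell one-liner, so it must run with shell=True.
    return subprocess.check_output(info_cmd(qcowfile), shell=True,
                                   universal_newlines=True).strip()
```

Without -U, the info query races against the still-running qemu-img convert on the same image, which matches the lock errors reported above.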

I proposed a change in https://github.com/BjarnBachmanVC/VirtBKP/blob/patch-2/virtbkp_utils.py. I'm not sure whether I should be the one to open the pull request. Please advise.

BjarnBachmanVC commented 5 years ago

Ok, I made a pull request, but there doesn't seem to be much activity here ...


vacosta94 commented 5 years ago

This is fixed in the latest commit.

This issue affected 3 parts of the software; all 3 were addressed in the latest commit.

Please pull the latest version and try again.