elastio / elastio-snap

kernel module for taking block-level snapshots and incremental backups of Linux block devices
GNU General Public License v2.0

Destroying a snapshot causes VM crash on SuSE15-SP3 kernel-5.3.18-150300.59.63-default #155

Closed jamesruic closed 2 years ago

jamesruic commented 2 years ago

I am testing snapshot creation on SuSE15-SP3 with kernel 5.3.18-150300.59.63-default; when I delete a snapshot, it crashes my virtual machine. My VM runs on VMware and uses the newest elastio-snap code (the version containing "Fix module compilation on Linux kernel 5.18").


```
localhost:~/elastio-snap/src # elioctl setup-snapshot /dev/sda3 /.snapshot0 0
localhost:~/elastio-snap/src # elioctl setup-snapshot /dev/sda4 /hana/.snapshot0 1
```

```
localhost:~/elastio-snap/src # df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               4.0M  8.0K  4.0M   1% /dev
tmpfs                  486M     0  486M   0% /dev/shm
tmpfs                  195M  8.8M  186M   5% /run
tmpfs                  4.0M     0  4.0M   0% /sys/fs/cgroup
/dev/sda3               10G  4.6G  5.5G  46% /
/dev/sda1              511M  6.9M  505M   2% /boot/efi
/dev/mapper/bak-lvbak   16G  109M   15G   1% /bak
/dev/sda4              4.5G   55M  4.5G   2% /hana
tmpfs                   98M   60K   98M   1% /run/user/467
tmpfs                   98M   72K   97M   1% /run/user/0
/dev/sr0               393M  393M     0 100% /run/media/root/SLE-15-SP3-Online-x86_64187..001
```

```
localhost:~/elastio-snap/src # cat /proc/elastio-snap-info
{
  "version": "0.11.0",
  "devices": [
    {
      "minor": 0,
      "cow_file": "/.snapshot0",
      "block_device": "/dev/sda3",
      "max_cache": 314572800,
      "fallocate": 1073741824,
      "seq_id": 1,
      "uuid": "82996efbc7bc490c9747056ad6251cb9",
      "version": 1,
      "nr_changed_blocks": 0,
      "state": 3
    },
    {
      "minor": 1,
      "cow_file": "/.snapshot0",
      "block_device": "/dev/sda4",
      "max_cache": 314572800,
      "fallocate": 482344960,
      "seq_id": 1,
      "uuid": "d99dad1a591a4cf8a3228993a085a12e",
      "version": 1,
      "nr_changed_blocks": 0,
      "state": 3
    }
  ]
}
```
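(Side note for anyone following along: since /proc/elastio-snap-info is plain JSON, the per-device state can be pulled out with jq, assuming jq is installed. This is just a convenience for inspecting the tracked devices, not part of elioctl.)

```sh
# List minor, backing block device, and state for each tracked device
jq '.devices[] | {minor, block_device, state}' /proc/elastio-snap-info
```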

```
localhost:~/elastio-snap/src # elioctl destroy 1
Segmentation fault
```

Here is my dmesg:

```
[ 2464.214217] elastio-snap: ending tracing
[ 2464.214218] elastio-snap: thawing 'sda4'
[ 2464.214222] elastio-snap: stopping cow thread
[ 2464.214264] elastio-snap: stopping mrf thread
[ 2464.214280] elastio-snap: freeing gendisk
[ 2464.214974] elastio-snap: freeing request queue
[ 2464.218020] general protection fault: 0000 [#1] SMP NOPTI
[ 2464.218048] CPU: 0 PID: 7003 Comm: fwupd Tainted: G OE N 5.3.18-150300.59.63-default #1 SLE15-SP3
[ 2464.218050] Hardware name: VMware, Inc. VMware7,1/440BX Desktop Reference Platform, BIOS VMW71.00V.16707776.B64.2008070230 08/07/2020
[ 2464.218075] RIP: 0010:x86_indirect_thunk_rax+0x0/0x20
[ 2464.218088] Code: ca eb 08 48 8d 14 8a eb 02 89 ca 0f ae f8 e9 57 62 d1 ff b8 f2 ff ff ff 66 31 d2 e9 91 66 d1 ff 48 8d 0c c8 e9 e1 8c d1 ff 90 e0 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 66 66 2e 0f 1f
[ 2464.218092] RSP: 0000:ffffb57f82583ab8 EFLAGS: 00010286
[ 2464.218094] RAX: 82fe5b45d0dd5dc3 RBX: 0000000000000000 RCX: 0000000000000000
[ 2464.218095] RDX: 0000000000000f20 RSI: 0000000000000000 RDI: ffff9e0799237200
[ 2464.218097] RBP: ffffb57f82583b30 R08: 00000000ffffefff R09: 0000000000000000
[ 2464.218098] R10: ffffb57f82583b50 R11: 0000000000000000 R12: 00000000ffffffff
[ 2464.218100] R13: ffff9e078a083030 R14: ffff9e078a0ba000 R15: ffff9e0799237200
[ 2464.218102] FS:  00007f30732eff80(0000) GS:ffff9e07bea00000(0000) knlGS:0000000000000000
[ 2464.218103] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2464.218105] CR2: 000055a79c35c050 CR3: 00000000231fa005 CR4: 00000000007706f0
[ 2464.218135] PKRU: 55555554
[ 2464.218136] Call Trace:
[ 2464.218162]  submit_bio_noacct+0x175/0x490
[ 2464.218176]  ? submit_bio+0xe3/0x1a0
[ 2464.218177]  submit_bio+0xe3/0x1a0
[ 2464.218193]  ? swap_slot_free_notify+0xc0/0xc0
[ 2464.218196]  ? get_swap_bio+0xaf/0xd0
[ 2464.218198]  swap_readpage+0x161/0x250
[ 2464.218206]  read_swap_cache_async+0x3e/0x60
[ 2464.218209]  swap_cluster_readahead+0x211/0x2b0
[ 2464.218211]  ? swapin_readahead+0x95/0x4f0
[ 2464.218220]  ? _copy_to_user+0x22/0x30
[ 2464.218222]  swapin_readahead+0x95/0x4f0
[ 2464.218231]  ? xas_load+0x9/0x80
[ 2464.218235]  ? find_get_entry+0x5a/0x160
[ 2464.218237]  ? pagecache_get_page+0x30/0x2c0
[ 2464.218244]  ? do_swap_page+0x270/0x7e0
[ 2464.218246]  do_swap_page+0x270/0x7e0
[ 2464.218256]  handle_mm_fault+0x843/0x1260
[ 2464.218271]  handle_mm_fault+0xc4/0x200
[ 2464.218280]  __do_page_fault+0x2ce/0x500
[ 2464.218289]  do_page_fault+0x30/0x110
[ 2464.218292]  page_fault+0x3e/0x50
[ 2464.218300] RIP: 0033:0x7f3070df126a
[ 2464.218302] Code: 39 ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 66 90 89 f8 31 d2 c5 c5 ef ff 09 f0 25 ff 0f 00 00 3d 80 0f 00 00 0f 8f 56 03 00 00 fe 6f 0f c5 f5 74 06 c5 fd da c1 c5 fd 74 c7 c5 fd d7 c8 85 c9
[ 2464.218305] RSP: 002b:00007ffe69af1648 EFLAGS: 00010287
[ 2464.218306] RAX: 00000000000003d0 RBX: 0000000000000002 RCX: 0000000000000002
[ 2464.218308] RDX: 0000000000000000 RSI: 000055a79c3af390 RDI: 000055a79c35c050
[ 2464.218309] RBP: 000055a79c389840 R08: 000055a79c1ca0a0 R09: 0000000000000000
[ 2464.218311] R10: 000055a79c199120 R11: 00007ffe69af1a30 R12: 000055a79c1cb030
[ 2464.218312] R13: 000055a79c3af390 R14: 000055a79c25c8c0 R15: 000055a79c35a250
```


From my testing: if you create two or more snapshots and then destroy the newest one, this problem occurs.
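For clarity, the minimal trigger sequence with the devices from this report looks like the following ("newest" here just means the snapshot with the highest minor, i.e. the one set up last):

```sh
# Trigger sequence observed above: two snapshots, then destroy the newest (highest minor).
elioctl setup-snapshot /dev/sda3 /.snapshot0 0        # minor 0
elioctl setup-snapshot /dev/sda4 /hana/.snapshot0 1   # minor 1 (the newest)
elioctl destroy 1                                     # destroying the newest one crashes the VM
```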

e-kov commented 2 years ago

Hi @105590023, thank you for your interest in elastio-snap. Unfortunately, we do not test elastio-snap on SuSE and do not provide package repositories for it. Please find here the list of Linux distros for which we do testing and provide deb/rpm packages in the appropriate repositories. I tried to reproduce this issue on Ubuntu 20.04 with kernel 5.4.0-121-generic, the closest to your 5.3 version, using kvm/qemu VMs. I know my environment is quite different, but the kernel versions are almost the same, and the issue did not reproduce. Here are some questions which may help diagnose the problem:

jamesruic commented 2 years ago

@e-kov thank you for the information. Following your suggestion, I tested elastio-snap on Ubuntu 20.04 with kernel 5.4.0-121-generic and on other Ubuntu versions. The problem also occurs on Ubuntu 21.04 with kernel 5.11.0-16-generic, but the driver works fine on Ubuntu 22.04 and Ubuntu 20.04.

Here are the answers to your questions:


Here is my information on Ubuntu 21.04 with kernel 5.11.0-16-generic:

```
root@ubuntu21:~/elastio-snap/src# df -h
Filesystem                         Size  Used Avail Use% Mounted on
tmpfs                              198M  1.2M  197M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   19G  5.0G   13G  29% /
tmpfs                              990M     0  990M   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              4.0M     0  4.0M   0% /sys/fs/cgroup
/dev/sda2                          976M  129M  781M  15% /boot
/dev/sda1                          511M  5.3M  506M   2% /boot/efi
tmpfs                              198M  4.0K  198M   1% /run/user/0
```

```
root@ubuntu21:~/elastio-snap/src# elioctl setup-snapshot /dev/mapper/ubuntu--vg-ubuntu--lv /.snapshot0 0
root@ubuntu21:~/elastio-snap/src# elioctl setup-snapshot /dev/sda2 /boot/.snapshot0 1
root@ubuntu21:~/elastio-snap/src# elioctl setup-snapshot /dev/sda1 /boot/efi/.snapshot0 2
```


```
root@ubuntu21:~/elastio-snap/src# lsblk -f
NAME                      FSTYPE      FSVER    LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINT
loop0                     squashfs    4.0                                                         0   100% /snap/core18/2538
loop1                     squashfs    4.0                                                         0   100% /snap/core20/1593
loop2                     squashfs    4.0                                                         0   100% /snap/core18/1997
loop3                     squashfs    4.0                                                         0   100% /snap/lxd/20037
loop4                     squashfs    4.0                                                         0   100% /snap/snapd/16292
loop5                     squashfs    4.0                                                         0   100% /snap/lxd/23339
sda
├─sda1                    vfat        FAT32          09FC-4942                                454.6M    11% /boot/efi
├─sda2                    ext4        1.0            b430a15b-68c5-42c5-bf74-faa0181bbdd2     678.3M    24% /boot
└─sda3                    LVM2_member LVM2 001       1HbsZp-NAbO-fF1l-NbdZ-5XzW-Iv9L-VBpqLP
  └─ubuntu--vg-ubuntu--lv ext4        1.0            f7662ebe-e27c-4b98-83ee-f4c315eeafc4      10.4G    37% /
sr0
elastio-snap0
elastio-snap1
elastio-snap2
```
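(Aside: the elastio-snap0/1/2 entries in the lsblk output are the snapshot block devices created by the module. As far as I understand they can be mounted read-only to sanity-check a snapshot before destroying it; the /dev/elastio-snap<minor> node path below is an assumption based on the lsblk names.)

```sh
# Assumption: the snapshot devices shown by lsblk are exposed as /dev/elastio-snap<minor>.
mkdir -p /mnt/snap1
mount -o ro /dev/elastio-snap1 /mnt/snap1   # read-only look at the /boot snapshot (minor 1)
ls /mnt/snap1
umount /mnt/snap1
```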


```
root@ubuntu21:~/elastio-snap/src# cat /proc/elastio-snap-info
{
  "version": "0.11.0",
  "devices": [
    {
      "minor": 0,
      "cow_file": "/.snapshot0",
      "block_device": "/dev/dm-0",
      "max_cache": 314572800,
      "fallocate": 1986002944,
      "seq_id": 1,
      "uuid": "238d0e1af0a04620ac2e249a86da3341",
      "version": 1,
      "nr_changed_blocks": 422,
      "state": 3
    },
    {
      "minor": 1,
      "cow_file": "/.snapshot0",
      "block_device": "/dev/sda2",
      "max_cache": 314572800,
      "fallocate": 106954752,
      "seq_id": 1,
      "uuid": "a969d0a8296844dc88793143e6a1748f",
      "version": 1,
      "nr_changed_blocks": 66,
      "state": 3
    },
    {
      "minor": 2,
      "cow_file": "/.snapshot0",
      "block_device": "/dev/sda1",
      "max_cache": 314572800,
      "fallocate": 53477376,
      "seq_id": 1,
      "uuid": "aeab033ead3241d2968814bd2bd16e6e",
      "version": 1,
      "nr_changed_blocks": 0,
      "state": 3
    }
  ]
}
```

```
root@ubuntu21:~/elastio-snap/src# elioctl destroy 2
```

dmesg:

```
[ 1302.133216] elastio-snap: freeing gendisk
[ 1302.133329] elastio-snap: freeing request queue
[ 1302.142116] elastio-snap: freeing cow path
[ 1302.142118] elastio-snap: destroying cow manager. close method: 0
[ 1302.142130] elastio-snap: freeing base block device path
[ 1302.142130] elastio-snap: freeing base block device
[ 1302.142147] ------------[ cut here ]------------
[ 1302.142148] WARNING: CPU: 0 PID: 48597 at kernel/module.c:1187 module_put+0x5e/0x70
[ 1302.142154] Modules linked in: elastio_snap(OE) vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vsock nls_iso8859_1 dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua intel_rapl_msr intel_rapl_common rapl vmw_balloon efi_pstore joydev input_leds serio_raw vmw_vmci mac_hid sch_fq_codel msr ip_tables x_tables autofs4 btrfs blake2b_generic raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear hid_generic usbhid hid crct10dif_pclmul crc32_pclmul ghash_clmulni_intel vmwgfx ttm aesni_intel drm_kms_helper crypto_simd syscopyarea sysfillrect cryptd glue_helper sysimgblt fb_sys_fops psmouse cec mptspi rc_core mptscsih drm mptbase vmxnet3 ahci libahci scsi_transport_spi pata_acpi i2c_piix4
[ 1302.142194] CPU: 0 PID: 48597 Comm: elioctl Tainted: G W OE 5.11.0-16-generic #17-Ubuntu
[ 1302.142196] Hardware name: VMware, Inc. VMware7,1/440BX Desktop Reference Platform, BIOS VMW71.00V.16707776.B64.2008070230 08/07/2020
[ 1302.142197] RIP: 0010:module_put+0x5e/0x70
[ 1302.142199] Code: 8b 05 06 1c 2b 5b 89 c0 48 0f a3 05 bc 91 df 01 73 eb 48 8b 05 d3 54 db 01 48 85 c0 74 09 48 8b 78 08 e8 f5 d2 ff ff 5d c3 c3 <0f> 0b eb c6 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 0f 1f 44 00
[ 1302.142200] RSP: 0018:ffffa5ca41a3fdb0 EFLAGS: 00010297
[ 1302.142202] RAX: 0000000000000000 RBX: ffff990e4b7692c0 RCX: 0000000000000000
[ 1302.142203] RDX: 00000000ffffffff RSI: ffff990e59881dc0 RDI: ffff990e59881dc0
[ 1302.142204] RBP: ffffa5ca41a3fdb0 R08: 0000000000000000 R09: ffffa5ca41a3fbf0
[ 1302.142204] R10: ffffa5ca41a3fbe8 R11: ffffffffa6953508 R12: 0000000000000001
[ 1302.142205] R13: ffff990e4b7692f8 R14: ffff990e49a47200 R15: ffff990e5496fd00
[ 1302.142207] FS:  00007f8e34e54740(0000) GS:ffff990ebdc00000(0000) knlGS:0000000000000000
[ 1302.142208] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1302.142208] CR2: 000056022e01fa64 CR3: 0000000014fd2005 CR4: 00000000007706f0
[ 1302.142229] PKRU: 55555554
[ 1302.142238] Call Trace:
[ 1302.142242]  blkdev_put+0x67/0x140
[ 1302.142246]  tracer_destroy_base_dev+0x62/0x70 [elastio_snap]
[ 1302.142250]  tracer_destroy+0xc3/0xd0 [elastio_snap]
[ 1302.142252]  ctrl_ioctl+0x666/0xeb0 [elastio_snap]
[ 1302.142254]  ? putname+0x4c/0x60
[ 1302.142258]  x64_sys_ioctl+0x91/0xc0
[ 1302.142260]  ? x64_sys_ioctl+0x91/0xc0
[ 1302.142262]  do_syscall_64+0x38/0x90
[ 1302.142267]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1302.142270] RIP: 0033:0x7f8e34f64ecb
[ 1302.142273] Code: ff ff ff 85 c0 79 8b 49 c7 c4 ff ff ff ff 5b 5d 4c 89 e0 41 5c c3 66 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 6d 1f 0d 00 f7 d8 64 89 01 48
[ 1302.142274] RSP: 002b:00007fffd7c54678 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
[ 1302.142276] RAX: ffffffffffffffda RBX: 0000563b9a0d1010 RCX: 00007f8e34f64ecb
[ 1302.142277] RDX: 00007fffd7c5468c RSI: 0000000040044104 RDI: 0000000000000003
[ 1302.142278] RBP: 00007fffd7c546a0 R08: 1999999999999999 R09: 0000000000000000
[ 1302.142279] R10: 00007f8e35074db0 R11: 0000000000000206 R12: 0000563b9a0d0280
[ 1302.142280] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 1302.142282] ---[ end trace d7c628236718b5fd ]---
[ 1302.142285] elastio-snap: minor range = 0 - 1
[ 1303.220071] elastio-snap: ioctl command received: 1074020612
[ 1303.220075] elastio-snap: received destroy ioctl - 1
[ 1303.220077] elastio-snap: replacing make_request_fn if needed
[ 1303.220086] elastio-snap: freezing 'sda2'
[ 1303.220141] kernel tried to execute NX-protected page - exploit attempt? (uid: 0)
[ 1303.220273] BUG: unable to handle page fault for address: ffff990e4a8be0d0
[ 1303.220376] #PF: supervisor instruction fetch in kernel mode
[ 1303.220469] #PF: error_code(0x0011) - permissions violation
[ 1303.220570] PGD 33803067 P4D 33803067 PUD 33804067 PMD a882063 PTE 800000000a8be163
[ 1303.220683] Oops: 0011 [#1] SMP NOPTI
[ 1303.220776] CPU: 0 PID: 48497 Comm: kworker/u2:2 Tainted: G W OE 5.11.0-16-generic #17-Ubuntu
[ 1303.220885] Hardware name: VMware, Inc. VMware7,1/440BX Desktop Reference Platform, BIOS VMW71.00V.16707776.B64.2008070230 08/07/2020
[ 1303.221067] Workqueue: writeback wb_workfn (flush-8:0)
[ 1303.221167] RIP: 0010:0xffff990e4a8be0d0
[ 1303.221259] Code: 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 af 4a 0e 99 ff ff <00> 00 00 00 00 00 00 00 d8 e0 8b 4a 0e 99 ff ff d8 e0 8b 4a 0e 99
[ 1303.221460] RSP: 0018:ffffa5ca41967970 EFLAGS: 00010282
[ 1303.221552] RAX: ffff990e4a8be0d0 RBX: ffff990e49a23790 RCX: 0000000000000000
[ 1303.221651] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff990e4915e300
[ 1303.221744] RBP: ffffa5ca419679d0 R08: 0000000000000001 R09: ffffa5ca41967938
[ 1303.221837] R10: 0000000000000006 R11: ffff990ebffd6000 R12: 00000000ffffffff
[ 1303.221941] R13: ffff990e4915e300 R14: ffff990e49a47200 R15: ffff990e449962a0
[ 1303.222034] FS:  0000000000000000(0000) GS:ffff990ebdc00000(0000) knlGS:0000000000000000
[ 1303.222138] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1303.222237] CR2: ffff990e4a8be0d0 CR3: 00000000178f8001 CR4: 00000000007706f0
[ 1303.222365] PKRU: 55555554
[ 1303.222456] Call Trace:
[ 1303.222539]  ? __submit_bio_noacct+0xd8/0x330
[ 1303.222632]  submit_bio_noacct+0x4a/0x90
[ 1303.222723]  submit_bio+0x4f/0x1b0
[ 1303.222812]  ext4_io_submit+0x4d/0x60
[ 1303.222907]  ext4_writepages+0x52f/0x830
[ 1303.223006]  ? cpuacct_charge+0x56/0x70
[ 1303.223097]  ? update_curr+0x105/0x1c0
[ 1303.223202]  do_writepages+0x38/0xc0
[ 1303.223302]  ? ttwu_do_activate+0x6e/0xd0
[ 1303.223403]  writeback_single_inode+0x44/0x200
[ 1303.223512]  writeback_sb_inodes+0x223/0x4d0
[ 1303.223616]  wb_writeback+0xbd/0x290
[ 1303.224627]  wb_do_writeback+0x7d/0x160
[ 1303.225284]  ? set_worker_desc+0xa6/0xb0
[ 1303.225941]  wb_workfn+0x72/0x250
[ 1303.226587]  ? switch_to+0x192/0x3f0
[ 1303.227205]  process_one_work+0x220/0x3c0
[ 1303.227802]  worker_thread+0x50/0x370
[ 1303.228379]  kthread+0x12f/0x150
[ 1303.228950]  ? process_one_work+0x3c0/0x3c0
[ 1303.229492]  ? kthread_bind_mask+0x70/0x70
[ 1303.230028]  ret_from_fork+0x1f/0x30
[ 1303.230553] Modules linked in: elastio_snap(OE) vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vsock nls_iso8859_1 dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua intel_rapl_msr intel_rapl_common rapl vmw_balloon efi_pstore joydev input_leds serio_raw vmw_vmci mac_hid sch_fq_codel msr ip_tables x_tables autofs4 btrfs blake2b_generic raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear hid_generic usbhid hid crct10dif_pclmul crc32_pclmul ghash_clmulni_intel vmwgfx ttm aesni_intel drm_kms_helper crypto_simd syscopyarea sysfillrect cryptd glue_helper sysimgblt fb_sys_fops psmouse cec mptspi rc_core mptscsih drm mptbase vmxnet3 ahci libahci scsi_transport_spi pata_acpi i2c_piix4
[ 1303.234656] CR2: ffff990e4a8be0d0
[ 1303.235248] ---[ end trace d7c628236718b5fe ]---
[ 1303.254841] RIP: 0010:0xffff990e4a8be0d0
[ 1303.255413] Code: 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 af 4a 0e 99 ff ff <00> 00 00 00 00 00 00 00 d8 e0 8b 4a 0e 99 ff ff d8 e0 8b 4a 0e 99
[ 1303.256585] RSP: 0018:ffffa5ca41967970 EFLAGS: 00010282
[ 1303.257163] RAX: ffff990e4a8be0d0 RBX: ffff990e49a23790 RCX: 0000000000000000
[ 1303.257740] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff990e4915e300
[ 1303.258312] RBP: ffffa5ca419679d0 R08: 0000000000000001 R09: ffffa5ca41967938
[ 1303.258892] R10: 0000000000000006 R11: ffff990ebffd6000 R12: 00000000ffffffff
[ 1303.259459] R13: ffff990e4915e300 R14: ffff990e49a47200 R15: ffff990e449962a0
[ 1303.260044] FS:  0000000000000000(0000) GS:ffff990ebdc00000(0000) knlGS:0000000000000000
[ 1303.260621] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1303.261195] CR2: ffff990e4a8be0d0 CR3: 00000000178f8001 CR4: 00000000007706f0
[ 1303.261798] PKRU: 55555554
[ 1303.264111] kernel tried to execute NX-protected page - exploit attempt? (uid: 0)
[ 1303.264625] BUG: unable to handle page fault for address: ffff990e4a8be0d0
[ 1303.265130] #PF: supervisor instruction fetch in kernel mode
[ 1303.265641] #PF: error_code(0x0011) - permissions violation
[ 1303.266156] PGD 33803067 P4D 33803067 PUD 33804067 PMD a882063 PTE 800000000a8be163
[ 1303.266691] Oops: 0011 [#2] SMP NOPTI
```

e-kov commented 2 years ago

@105590023 thanks a lot for the detailed reply. I was able to reproduce the same scenario you described on Ubuntu 22.04 with kernel 5.15. The steps are the same as yours:

  1. setup snapshot for the root (mount point /) LVM device with minor 0, fs ext4.
  2. setup snapshot for the boot (mount point /boot) device with minor 1, fs ext2.
  3. setup snapshot for the EFI (mount point /boot/efi) device with minor 2, fs vfat.
  4. destroy snapshot for the EFI device with minor 2.

An interesting point is that other orderings of the setup-snapshot and destroy calls avoid this problem. For now I have no idea what the root cause is or what the workarounds are. It could be related to the EFI partition with vfat and/or to the LVM partition used in this scenario. By the way, I was using a kvm-qemu VM, so this issue is not specific to VMware.

e-kov commented 2 years ago

Hey @105590023. The issue has been resolved recently. Please try the latest commit 8dd510f.
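For anyone else hitting this, a rough way to pick up that commit might look like the sketch below. The build and reload steps are assumptions about a typical out-of-tree module workflow rather than documented elastio-snap instructions, so adjust them to however the module was originally installed.

```sh
cd ~/elastio-snap
git fetch origin
git checkout 8dd510f           # the commit referenced above
make                           # assumption: the module builds via the repository's Makefile
rmmod elastio_snap             # unload the old module (only with no active snapshots)
insmod src/elastio-snap.ko     # assumption: the built .ko ends up under src/
```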

jamesruic commented 2 years ago

Hi @e-kov, this issue is fixed for me now. Thank you for sharing.