Closed: ncist2011 closed this issue 2 years ago
Hi, this looks familiar, but I don't recall seeing it in a long time, so it may have been fixed. Like you, my first suspicion would be interference from udev. You're using an old version of lvm, so it would be interesting if you could see it with a recent version. If so please send the full -vvvv (four v's).
My lvm version:
lvm version
LVM version: 2.02.181(2) (2018-08-01)
Library version: 1.02.150 (2018-08-01)
Driver version: 4.39.0
Configuration: ./configure --enable-lvmlockd-sanlock
This is the -vvvv output:
#metadata/vg.c:68 Allocated VG global_lock at 0xaaae10f48df0.
#format_text/import_vsn1.c:591 Importing logical volume global_lock/lvmlock.
#cache/lvmetad.c:1182 Sending lvmetad pending VG global_lock (seqno 2)
#format_text/format-text.c:331 Reading mda header sector from /dev/mapper/mpatha at 65536
#format_text/format-text.c:790 Committing global_lock metadata (2) to /dev/mapper/mpatha header at 65536
#locking/locking.c:331 Dropping cache for global_lock.
#metadata/vg.c:83 Freeing VG global_lock at 0xaaae10f61240.
#mm/memlock.c:594 Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
#format_text/archiver.c:576 Creating volume group backup "/etc/lvm/backup/global_lock" (seqno 2).
#format_text/format-text.c:999 Writing global_lock metadata to /etc/lvm/backup/.lvm_kunpeng03_1875425_722094501
#format_text/format-text.c:1018 Renaming /etc/lvm/backup/.lvm_kunpeng03_1875425_722094501 to /etc/lvm/backup/global_lock.tmp
#format_text/format-text.c:1043 Committing global_lock metadata (2)
#format_text/format-text.c:1044 Renaming /etc/lvm/backup/global_lock.tmp to /etc/lvm/backup/global_lock
#metadata/lv.c:1511 Activating logical volume global_lock/lvmlock locally.
#activate/dev_manager.c:779 Getting device info for global_lock-lvmlock [LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y].
#ioctl/libdm-iface.c:1859 dm info LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y [ noopencount flush ] [16384] (*1)
#activate/dev_manager.c:761 Skipping checks for old devices without LVM- dm uuid prefix (kernel vsn 4 >= 3).
#activate/activate.c:1578 global_lock/lvmlock is not active
#locking/file_locking.c:100 Locking LV 9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y (R)
#activate/activate.c:466 activation/volume_list configuration setting not defined: Checking only host tags for global_lock/lvmlock.
#activate/activate.c:2803 Activating global_lock/lvmlock noscan.
#activate/dev_manager.c:779 Getting device info for global_lock-lvmlock [LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y].
#ioctl/libdm-iface.c:1859 dm info LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y [ noopencount flush ] [16384] (*1)
#mm/memlock.c:626 Entering prioritized section (activating).
#mm/memlock.c:489 Raised task priority 0 -> -18.
#activate/dev_manager.c:3225 Creating ACTIVATE tree for global_lock/lvmlock.
#activate/dev_manager.c:779 Getting device info for global_lock-lvmlock [LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y].
#ioctl/libdm-iface.c:1859 dm info LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y [ opencount flush ] [16384] (*1)
#activate/dev_manager.c:779 Getting device info for global_lock-lvmlock-real [LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y-real].
#ioctl/libdm-iface.c:1859 dm info LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y-real [ opencount flush ] [16384] (*1)
#activate/dev_manager.c:779 Getting device info for global_lock-lvmlock-cow [LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y-cow].
#ioctl/libdm-iface.c:1859 dm info LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y-cow [ opencount flush ] [16384] (*1)
#activate/dev_manager.c:2869 Adding new LV global_lock/lvmlock to dtree
#libdm-deptree.c:604 Not matched uuid LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y in deptree.
#libdm-deptree.c:604 Not matched uuid LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y in deptree.
#activate/dev_manager.c:2791 Checking kernel supports striped segment type for global_lock/lvmlock
#activate/activate.c:522 Getting target version for linear
#ioctl/libdm-iface.c:1859 dm versions [ opencount flush ] [16384] (*1)
#activate/activate.c:559 Found linear target v1.4.0.
#activate/activate.c:522 Getting target version for striped
#ioctl/libdm-iface.c:1859 dm versions [ opencount flush ] [16384] (*1)
#activate/activate.c:559 Found striped target v1.6.0.
#ioctl/libdm-iface.c:1859 dm deps (252:7) [ opencount flush ] [16384] (*1)
#libdm-deptree.c:1944 Creating global_lock-lvmlock
#ioctl/libdm-iface.c:1859 dm create global_lock-lvmlock LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y [ noopencount flush ] [16384] (*1)
#libdm-deptree.c:2696 Loading table for global_lock-lvmlock (252:9).
#libdm-deptree.c:2641 Adding target to (252:9): 0 524288 linear 252:7 22528
#ioctl/libdm-iface.c:1859 dm table (252:9) [ opencount flush ] [16384] (*1)
#ioctl/libdm-iface.c:1859 dm reload (252:9) [ noopencount flush ] [16384] (*1)
#ioctl/libdm-iface.c:1897 device-mapper: reload ioctl on (252:9) failed: Device or resource busy
#libdm-deptree.c:993 Removing global_lock-lvmlock (252:9)
#libdm-common.c:2434 Udev cookie 0xd4defd5 (semid 425985) created
#libdm-common.c:2454 Udev cookie 0xd4defd5 (semid 425985) incremented to 1
#libdm-common.c:2326 Udev cookie 0xd4defd5 (semid 425985) incremented to 2
#libdm-common.c:2576 Udev cookie 0xd4defd5 (semid 425985) assigned to REMOVE task(2) with flags SUBSYSTEM_0 (0x100)
#ioctl/libdm-iface.c:1859 dm remove (252:9) [ noopencount flush ] [16384] (*1)
#libdm-common.c:1488 global_lock-lvmlock: Stacking NODE_DEL [verify_udev]
#libdm-deptree.c:2846 <backtrace>
#activate/dev_manager.c:3291 <backtrace>
#activate/dev_manager.c:3331 <backtrace>
#activate/activate.c:1387 <backtrace>
#activate/activate.c:2822 <backtrace>
#mm/memlock.c:638 Leaving section (activated).
#activate/activate.c:2858 <backtrace>
#locking/locking.c:275 <backtrace>
#locking/locking.c:352 <backtrace>
#metadata/lv.c:1513 <backtrace>
#metadata/lv_manip.c:7894 Failed to activate new LV.
#locking/file_locking.c:95 Locking LV 9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y (NL)
#activate/activate.c:2633 Deactivating global_lock/lvmlock.
#activate/dev_manager.c:779 Getting device info for global_lock-lvmlock [LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y].
#ioctl/libdm-iface.c:1859 dm info LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y [ noopencount flush ] [16384] (*1)
#metadata/pv_manip.c:417 /dev/mapper/mpatha 0: 0 5117: NULL(0:0)
#locking/locking.c:331 Dropping cache for global_lock.
#mm/memlock.c:594 Unlock: Memlock counters: prioritized:1 locked:0 critical:0 daemon:0 suspended:0
#mm/memlock.c:502 Restoring original task priority 0.
#format_text/format-text.c:331 Reading mda header sector from /dev/mapper/mpatha at 65536
#format_text/format-text.c:678 Writing metadata for VG global_lock to /dev/mapper/mpatha at 68608 len 721 (wrap 0)
#format_text/format-text.c:331 Reading mda header sector from /dev/mapper/mpatha at 65536
#format_text/format-text.c:790 Pre-Committing global_lock metadata (3) to /dev/mapper/mpatha header at 65536
#metadata/vg.c:68 Allocated VG global_lock at 0xaaae10f6b640.
#cache/lvmetad.c:1182 Sending lvmetad pending VG global_lock (seqno 3)
#format_text/format-text.c:331 Reading mda header sector from /dev/mapper/mpatha at 65536
#format_text/format-text.c:790 Committing global_lock metadata (3) to /dev/mapper/mpatha header at 65536
#locking/locking.c:331 Dropping cache for global_lock.
#metadata/vg.c:83 Freeing VG global_lock at 0xaaae10f48df0.
#mm/memlock.c:594 Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
#format_text/archiver.c:576 Creating volume group backup "/etc/lvm/backup/global_lock" (seqno 3).
#format_text/format-text.c:999 Writing global_lock metadata to /etc/lvm/backup/.lvm_kunpeng03_1875425_2034702906
#format_text/format-text.c:1018 Renaming /etc/lvm/backup/.lvm_kunpeng03_1875425_2034702906 to /etc/lvm/backup/global_lock.tmp
#format_text/format-text.c:1043 Committing global_lock metadata (3)
#format_text/format-text.c:1044 Renaming /etc/lvm/backup/global_lock.tmp to /etc/lvm/backup/global_lock
#metadata/lv_manip.c:8083 <backtrace>
#locking/lvmlockd.c:355 Failed to create sanlock lv lvmlock in vg global_lock
#locking/lvmlockd.c:639 Failed to create internal lv.
#vgcreate.c:189 Failed to initialize lock args for lock type sanlock
#cache/lvmetad.c:1308 Sending lvmetad pending remove VG global_lock
#format_text/format-text.c:331 Reading mda header sector from /dev/mapper/mpatha at 65536
#metadata/metadata.c:562 Removing physical volume "/dev/mapper/mpatha" from volume group "global_lock"
#device/dev-io.c:336 /dev/mapper/mpatha: using cached size 41943040 sectors
#cache/lvmcache.c:2080 lvmcache /dev/mapper/mpatha: now in VG #orphans_lvm2 (#orphans_lvm2) with 1 mda(s).
#format_text/format-text.c:1460 Creating metadata area on /dev/mapper/mpatha at sector 128 size 22400 sectors
#format_text/text_label.c:184 /dev/mapper/mpatha: Preparing PV label header gwNKdu-c2NH-y2Ni-kJ6E-pDvg-jbZx-A0oJst size 21474836480 with da1 (22528s, 0s) mda1 (128s, 22400s)
#label/label.c:202 /dev/mapper/mpatha: Writing label to sector 1 with stored offset 32.
#format_text/format-text.c:331 Reading mda header sector from /dev/mapper/mpatha at 65536
#cache/lvmetad.c:1671 Telling lvmetad to store PV /dev/mapper/mpatha (gwNKdu-c2NH-y2Ni-kJ6E-pDvg-jbZx-A0oJst)
#cache/lvmetad.c:1338 Telling lvmetad to remove VGID 9ivzxu-QSyb-M0Lk-7Nd0-BkBI-eWgw-avYrYf (global_lock)
#metadata/metadata.c:592 Volume group "global_lock" successfully removed
#vgcreate.c:192 <backtrace>
#mm/memlock.c:594 Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
#activate/fs.c:491 Syncing device names
#libdm-common.c:2361 Udev cookie 0xd4defd5 (semid 425985) decremented to 0
#libdm-common.c:2650 Udev cookie 0xd4defd5 (semid 425985) waiting for zero
#libdm-common.c:2376 Udev cookie 0xd4defd5 (semid 425985) destroyed
#libdm-common.c:1488 global_lock-lvmlock: Processing NODE_DEL [verify_udev]
#locking/locking.c:331 Dropping cache for global_lock.
#misc/lvm-flock.c:70 Unlocking /run/lock/lvm/V_global_lock
#misc/lvm-flock.c:47 _undo_flock /run/lock/lvm/V_global_lock
#cache/lvmcache.c:751 lvmcache has no info for vgname "global_lock".
#locking/locking.c:331 Dropping cache for #orphans.
#misc/lvm-flock.c:70 Unlocking /run/lock/lvm/P_orphans
#misc/lvm-flock.c:47 _undo_flock /run/lock/lvm/P_orphans
#cache/lvmcache.c:751 lvmcache has no info for vgname "#orphans".
#metadata/vg.c:83 Freeing VG global_lock at 0xaaae10f6b640.
#metadata/vg.c:83 Freeing VG global_lock at 0xaaae10f40dd0.
#daemon-client.c:179 Closing daemon socket (fd 4).
#cache/lvmcache.c:2535 Dropping VG info
#cache/lvmcache.c:751 lvmcache has no info for vgname "#orphans_lvm2" with VGID #orphans_lvm2.
#cache/lvmcache.c:751 lvmcache has no info for vgname "#orphans_lvm2".
#cache/lvmcache.c:2082 lvmcache: Initialised VG #orphans_lvm2.
#lvmcmdline.c:3042 Completed: vgcreate global_lock /dev/mapper/mpatha --shared --metadatasize 10M -vvvv
strace:
renameat(AT_FDCWD, "/etc/lvm/backup/.lvm_kunpeng03_3245247_306622229", AT_FDCWD, "/etc/lvm/backup/global_lock.tmp") = 0
renameat(AT_FDCWD, "/etc/lvm/backup/global_lock.tmp", AT_FDCWD, "/etc/lvm/backup/global_lock") = 0
newfstatat(AT_FDCWD, "/etc/lvm/backup/global_lock.tmp", 0xfffffe2c3bb0, 0) = -1 ENOENT
openat(AT_FDCWD, "/etc/lvm/backup", O_RDONLY) = 34
fsync(34) = 0
close(34) = 0
ioctl(17, DM_DEV_STATUS, {version=4.0.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG} => {version=4.39.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG}) = -1 ENXIO
ioctl(17, DM_DEV_STATUS, {version=4.0.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG} => {version=4.39.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG}) = -1 ENXIO
getpriority(PRIO_PROCESS, 0) = 20
setpriority(PRIO_PROCESS, 0, -18) = 0
semctl(0, 0, SEM_INFO, 0xfffffe2c3990) = 0
faccessat(AT_FDCWD, "/run/udev/control", F_OK) = 0
ioctl(17, DM_DEV_STATUS, {version=4.0.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG} => {version=4.39.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG}) = -1 ENXIO
ioctl(17, DM_DEV_STATUS, {version=4.0.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq-real", flags=DM_EXISTS_FLAG} => {version=4.39.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq-real", flags=DM_EXISTS_FLAG}) = -1 ENXIO
ioctl(17, DM_DEV_STATUS, {version=4.0.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq-cow", flags=DM_EXISTS_FLAG} => {version=4.39.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq-cow", flags=DM_EXISTS_FLAG}) = -1 ENXIO
ioctl(17, DM_LIST_VERSIONS, {version=4.1.0, data_size=16384, data_start=312, flags=DM_EXISTS_FLAG} => {version=4.39.0, data_size=431, data_start=312, flags=DM_EXISTS_FLAG, ...}) = 0
ioctl(17, DM_LIST_VERSIONS, {version=4.1.0, data_size=16384, data_start=312, flags=DM_EXISTS_FLAG} => {version=4.39.0, data_size=431, data_start=312, flags=DM_EXISTS_FLAG, ...}) = 0
newfstatat(AT_FDCWD, "/dev/mapper/mpatha", {st_mode=S_IFBLK|0660, st_rdev=makedev(0xfc, 0x5), ...}, 0) = 0
newfstatat(AT_FDCWD, "/dev/mapper/mpatha", {st_mode=S_IFBLK|0660, st_rdev=makedev(0xfc, 0x5), ...}, 0) = 0
ioctl(17, DM_TABLE_DEPS, {version=4.0.0, data_size=16384, data_start=312, dev=makedev(0xfc, 0x5), flags=DM_EXISTS_FLAG|DM_PERSISTENT_DEV_FLAG} => {version=4.39.0, data_size=336, data_start=312, dev=makedev(0xfc, 0x5), name="mpatha", uuid="mpath-360014052ed4ec784c214f28abde3eb88", target_count=1, open_count=1, event_nr=0, flags=DM_EXISTS_FLAG|DM_PERSISTENT_DEV_FLAG|DM_ACTIVE_PRESENT_FLAG, ...}) = 0
ioctl(17, DM_DEV_CREATE, {version=4.0.0, data_size=16384, name="global_lock-lvmlock", uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG} => {version=4.39.0, data_size=305, dev=makedev(0xfc, 0x9), name="global_lock-lvmlock", uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", target_count=0, open_count=0, event_nr=0, flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG}) = 0
ioctl(17, DM_TABLE_STATUS, {version=4.0.0, data_size=16384, data_start=312, dev=makedev(0xfc, 0x9), flags=DM_EXISTS_FLAG|DM_PERSISTENT_DEV_FLAG|DM_STATUS_TABLE_FLAG} => {version=4.39.0, data_size=305, data_start=312, dev=makedev(0xfc, 0x9), name="global_lock-lvmlock", uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", target_count=0, open_count=0, event_nr=0, flags=DM_EXISTS_FLAG|DM_PERSISTENT_DEV_FLAG|DM_STATUS_TABLE_FLAG}) = 0
ioctl(17, DM_TABLE_LOAD, {version=4.0.0, data_size=16384, data_start=312, dev=makedev(0xfc, 0x9), target_count=1, flags=DM_EXISTS_FLAG|DM_PERSISTENT_DEV_FLAG|DM_SKIP_BDGET_FLAG, ...} => {version=4.39.0, data_size=16384, data_start=312, dev=makedev(0xfc, 0x9), flags=DM_EXISTS_FLAG|DM_PERSISTENT_DEV_FLAG|DM_SKIP_BDGET_FLAG}) = -1 EBUSY (Device or resource busy)
openat(AT_FDCWD, "/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 34
fstat(34, {st_mode=S_IFREG|0644, st_size=2997, ...}) = 0
read(34, "# Locale name alias data base.\n#"..., 8192) = 2997
read(34, "", 8192) = 0
close(34) = 0
openat(AT_FDCWD, "/usr/share/locale/zh_CN.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT
openat(AT_FDCWD, "/usr/share/locale/zh_CN.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT
openat(AT_FDCWD, "/usr/share/locale/zh_CN/LC_MESSAGES/libc.mo", O_RDONLY) = 34
fstat(34, {st_mode=S_IFREG|0644, st_size=131494, ...}) = 0
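The -vvvv log and strace above are long; a small filter like the following (a sketch - the helper name and log file name are mine, not from the thread) keeps only the lines that matter for this failure: the failing reload ioctl, the backtrace frames, and the final error summary.

```shell
# Filter a -vvvv vgcreate log down to the failure-relevant lines.
extract_failure() {
    grep -E 'reload ioctl|<backtrace>|Failed to'
}

# Real usage (file name is hypothetical):
#   vgcreate global_lock /dev/mapper/mpatha --shared --metadatasize 10M -vvvv 2>vgcreate.log
#   extract_failure <vgcreate.log
```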
A side question - do you actually compile LVM yourself, or do you use a distro build?
'Just enabling' sanlock is not enough - i.e. the udev rules are not enabled by the default configuration, and installing the udev rules into your particular system takes some know-how (normally 'make install' should be sufficient, but there are some corner cases - e.g. Debian differs). For local builds, providing the full build log is definitely a big help.
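As a quick sanity check on the udev-rules point above, something like this can verify that the dm/lvm rules actually landed after 'make install'. The rule filenames below are the usual ones from the lvm2 2.02 source tree (an assumption - your build may install a different subset), and the rules directory varies per distro.

```shell
# Check that the expected dm/lvm udev rule files exist in a rules directory.
check_udev_rules() {
    # $1 = udev rules directory, e.g. /usr/lib/udev/rules.d
    local dir="$1" missing=0 rule
    for rule in 10-dm.rules 11-dm-lvm.rules 13-dm-disk.rules 95-dm-notify.rules; do
        if [ ! -e "$dir/$rule" ]; then
            echo "missing: $rule"
            missing=1
        fi
    done
    return "$missing"
}

# Real usage:
#   check_udev_rules /usr/lib/udev/rules.d
```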
However, this particular error message seems to suggest that a DM device with that name likely already exists - just possibly with a different UUID?
It would be helpful to post the output of:
dmsetup info -c
before executing the failing command.
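Building on that suggestion, a helper like this (the function name is mine) can check whether a device-mapper node with the name lvm would use (vg-lv, e.g. global_lock-lvmlock) already exists, and print its UUID if so - a leftover node with the same name but a different UUID would explain the busy reload.

```shell
# Look up a dm node by name in 'dmsetup info -c' output; print its UUID.
find_dm_node() {
    # $1 = dm node name; stdin = output of: dmsetup info -c --noheadings -o name,uuid
    awk -v name="$1" '$1 == name { print $2; found = 1 } END { exit !found }'
}

# Real usage (requires root):
#   dmsetup info -c --noheadings -o name,uuid | find_dm_node global_lock-lvmlock
```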
I compiled this myself:
lvm version
LVM version: 2.02.181(2) (2018-08-01)
Library version: 1.02.150 (2018-08-01)
Driver version: 4.39.0
Configuration: ./configure --enable-lvmlockd-sanlock
But the system's built-in lvm version has the same problem:
[root@kunpeng03 ~]# dmsetup info -c
Name Maj Min Stat Open Targ Event UUID
kunpeng03_SSD-kunpeng03_SSD 252 1 L--w 1 1 0 LVM-eO1T7X3l1vmeu7jEPBBPUp3RVZrdkSgMNxjthGnusCND4lVze1q1Lib2RHdVHd72
data-log 252 6 L--w 1 1 0 LVM-fOPUbytUCXKlt63bN5mR3JYKZFNjHwzmtGdLXQRwZFstd0XQD1UJEAOWXDKSGLjX
mpathb 252 8 L--w 0 1 0 mpath-36001405e880411833f04320aff07ec92
data-arstore 252 5 L--w 0 1 0 LVM-fOPUbytUCXKlt63bN5mR3JYKZFNjHwzm3Swp3BxJo6jv7TEO27BEbMPit2l0E3nD
mpatha 252 7 L--w 0 1 0 mpath-36001405ff5deb38e75945449db2ba3e0
kunpeng03_HDD-kunpeng03_HDD 252 0 L--w 1 1 0 LVM-lv0mEAITn0dBQTyyW11cpk7Xy4XHgSchx2aHUFckjnlMdEbTcJ5UD08OUH9bxj5s
data-backup 252 4 L--w 1 1 0 LVM-fOPUbytUCXKlt63bN5mR3JYKZFNjHwzmeDsvYWmw4OJ0Sf1b86TsBRpGxAEXQptT
data-datastore2 252 3 L--w 1 1 0 LVM-fOPUbytUCXKlt63bN5mR3JYKZFNjHwzmBWxZnm1xsixA3lY43BYsVoxtuDolXIyl
data-datastore1 252 2 L--w 1 1 0 LVM-fOPUbytUCXKlt63bN5mR3JYKZFNjHwzmy2gGDvUgVdLiW3wlh0S1x7Bz5eSkdhTG
[root@kunpeng03 ~]#
[root@kunpeng03 ~]# vgs
Skipping global lock: lockspace not found or started
VG #PV #LV #SN Attr VSize VFree
data 1 5 0 wz--n- <133.57g 0
kunpeng03_HDD 1 1 0 wz--n- <3.64t 0
kunpeng03_SSD 1 1 0 wz--n- <447.13g 0
[root@kunpeng03 ~]#
[root@kunpeng03 ~]# cd /home/
[root@kunpeng03 home]# ./lvm version
LVM version: 2.02.181(2) (2018-08-01)
Library version: 1.02.150 (2018-08-01)
Driver version: 4.39.0
Configuration: ./configure --build=aarch64-koji-linux-gnu --host=aarch64-koji-linux-gnu --program-prefix= --disable-dependency-tracking --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --with-default-dm-run-dir=/run --with-default-run-dir=/run/lvm --with-default-pid-dir=/run --with-default-locking-dir=/run/lock/lvm --with-usrlibdir=/usr/lib64 --enable-fsadm --enable-write_install --with-user= --with-group= --with-device-uid=0 --with-device-gid=6 --with-device-mode=0660 --enable-pkgconfig --enable-applib --enable-cmdlib --enable-dmeventd --enable-blkid_wiping --enable-python3-bindings --with-cluster=internal --with-clvmd=none --with-udevdir=/usr/lib/udev/rules.d --enable-udev_sync --with-thin=internal --enable-lvmetad --with-thin=internal --enable-lvmpolld --enable-lvmlockd-sanlock --enable-dbus-service --enable-notify-dbus --enable-dmfilemapd
[root@kunpeng03 home]# ./lvm vgcreate global /dev/mapper/mpatha --shared --metadatasize 10M
Enabling sanlock global lock
Physical volume "/dev/mapper/mpatha" successfully created.
device-mapper: reload ioctl on (252:9) failed: Device or resource busy
Failed to activate new LV.
Failed to create sanlock lv lvmlock in vg global
Failed to create internal lv.
Failed to initialize lock args for lock type sanlock
Volume group "global" successfully removed
Please compile lvm from the main branch at https://sourceware.org/git/?p=lvm2.git;a=summary and see if that works.
If this is still applicable for upstream, please feel free to reopen this issue - but for now, closing.
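A minimal sketch of building from the upstream main branch, for anyone retrying this. The clone URL is my assumption derived from the gitweb link above, and the configure flags just mirror the reporter's own build; a real installation may need more (udev rules, distro-specific paths).

```shell
# Build lvm2 from the upstream main branch (sketch, not a definitive recipe):
#   git clone https://sourceware.org/git/lvm2.git
#   cd lvm2
#   ./configure --enable-lvmlockd-sanlock --enable-udev_sync
#   make
#   sudo make install

# Helper to confirm which build you are actually running afterwards:
lvm_build_version() {
    # stdin = output of: lvm version
    awk -F': *' '/LVM version/ { print $2; exit }'
}

# Real usage:
#   lvm version | lvm_build_version
```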
When I create a VG with a command like
vgcreate vg1 /dev/mapper/mpathb --shared --metadatasize 10M
it shows me the error messages:
-vv outputs:
systemd-udevd debug outputs:
lsblk:
multipath -ll:
Am I missing anything?