lvmteam / lvm2

Mirror of upstream LVM2 repository
https://gitlab.com/lvmteam/lvm2
GNU General Public License v2.0

device-mapper: reload ioctl on (252:9) failed: Device or Resource busy #63

Closed (ncist2011 closed this issue 2 years ago)

ncist2011 commented 2 years ago

When I create a shared VG with a command like vgcreate vg1 /dev/mapper/mpathb --shared --metadatasize 10M, it fails with the following error messages:

# vgcreate vg1 /dev/mapper/mpathb --shared --metadatasize 10M
  Enabling sanlock global lock
  Physical volume "/dev/mapper/mpathb" successfully created.
  device-mapper: reload ioctl on  (252:9) failed: Device or Resource busy
  Failed to activate new LV.
  Failed to create sanlock lv lvmlock in vg vg1
  Failed to create internal lv.
  Failed to initialize lock args for lock type sanlock
  Volume group "vg1" successfully removed

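The reload ioctl failing with "Device or resource busy" means something else opened the freshly created dm device (252:9, the internal vg1-lvmlock LV that sanlock needs activated) before LVM could load its table. The following is not part of the original report, just a hedged way to catch the opener while reproducing; the VG and device names are taken from this report and are assumptions on any other host:

```shell
# Record udev events with their properties while the failure is reproduced,
# to see which rules fire against the new dm device (dm-9 / 252:9 here).
udevadm monitor --kernel --udev --property > /tmp/udev-events.log 2>&1 &
MONITOR_PID=$!

vgcreate vg1 /dev/mapper/mpathb --shared --metadatasize 10M

# If the lock LV survived, check its open count; a value above 0 means
# another process still holds the device open.
dmsetup info vg1-lvmlock

kill "$MONITOR_PID"
grep -E 'dm-9|252:9' /tmp/udev-events.log
```

If events for the new device show up between its "add" and the failed reload, comparing them against the rule files named in the systemd-udevd log below (kpartx, blkid, vendor rules) narrows down which rule had the device open.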
Output with -vv:

# vgcreate vg1 /dev/mapper/mpathb --shared --metadatasize 10M -vv
      devices/global_filter not found in config: defaulting to global_filter = [ "a|.*/|" ]
      global/lvmetad_update_wait_time not found in config: defaulting to 10
      devices/filter not found in config: defaulting to filter = [ "a|.*/|" ]
      devices/cache not found in config: defaulting to /etc/lvm/cache/.cache
      metadata/record_lvs_history not found in config: defaulting to 0
      File-based locking selected.
      metadata/pvmetadataignore not found in config: defaulting to 0
      metadata/pvmetadatacopies not found in config: defaulting to 1
      allocation/physical_extent_size not found in config: defaulting to 4096
      metadata/vgmetadatacopies not found in config: defaulting to 0
      /dev/sda: size is 937703088 sectors
      /dev/nbd0: size is 0 sectors
      /dev/nbd8: size is 0 sectors
      /dev/kunpeng03_HDD/kunpeng03_HDD: size is 7814029312 sectors
      /dev/sda1: size is 16777217 sectors
      /dev/kunpeng03_SSD/kunpeng03_SSD: size is 937697280 sectors
      /dev/sda2: size is 4194305 sectors
      /dev/data/datastore1: size is 20971520 sectors
      /dev/sda3: size is 209724417 sectors
      /dev/data/datastore2: size is 83886080 sectors
      /dev/sda4: size is 390649425 sectors
      /dev/data/backup: size is 83886080 sectors
      /dev/sda5: size is 316340881 sectors
      /dev/data/arstore: size is 20971520 sectors
      /dev/data/log: size is 70393856 sectors
      /dev/mapper/mpatha: size is 41943040 sectors
      /dev/mapper/mpathb: size is 7814037168 sectors
      /dev/sdb: size is 468862128 sectors
      /dev/sdb1: size is 2097152 sectors
      /dev/sdb2: size is 2097152 sectors
      /dev/sdb3: size is 16777216 sectors
      /dev/sdb4: size is 167772160 sectors
      /dev/sdb5: size is 280116367 sectors
      /dev/sdc: size is 937703088 sectors
      /dev/nbd1: size is 0 sectors
      /dev/nbd9: size is 0 sectors
      /dev/sdd: size is 7814037168 sectors
      /dev/sdd1: size is 1048576 sectors
      /dev/sdd2: size is 7812988524 sectors
      /dev/sde: size is 7814037168 sectors
      /dev/nbd2: size is 0 sectors
      /dev/nbd10: size is 0 sectors
      /dev/sdf: size is 41943040 sectors
      /dev/sdg: size is 41943040 sectors
      /dev/nbd3: size is 0 sectors
      /dev/nbd11: size is 0 sectors
      /dev/sdh: size is 7814037168 sectors
      /dev/sdi: size is 7814037168 sectors
      /dev/nbd4: size is 0 sectors
      /dev/nbd12: size is 0 sectors
      /dev/nbd5: size is 0 sectors
      /dev/nbd13: size is 0 sectors
      /dev/nbd6: size is 0 sectors
      /dev/nbd14: size is 0 sectors
      /dev/nbd7: size is 0 sectors
      /dev/nbd15: size is 0 sectors
      /dev/sda: No lvm label detected
      /dev/kunpeng03_HDD/kunpeng03_HDD: No lvm label detected
      /dev/sda1: No lvm label detected
      /dev/kunpeng03_SSD/kunpeng03_SSD: No lvm label detected
      /dev/sda2: No lvm label detected
      /dev/data/datastore1: No lvm label detected
      /dev/sda3: No lvm label detected
      /dev/data/datastore2: No lvm label detected
      /dev/sda4: No lvm label detected
      /dev/data/backup: No lvm label detected
      /dev/sda5: No lvm label detected
      /dev/data/arstore: No lvm label detected
      /dev/data/log: No lvm label detected
      Label checksum incorrect on /dev/mapper/mpatha - ignoring
      /dev/mapper/mpatha: No lvm label detected
      Label checksum incorrect on /dev/mapper/mpathb - ignoring
      /dev/mapper/mpathb: No lvm label detected
      /dev/sdb: No lvm label detected
      /dev/sdb1: No lvm label detected
      /dev/sdb2: No lvm label detected
      /dev/sdb3: No lvm label detected
      /dev/sdb4: No lvm label detected
      /dev/sdb5: lvm2 label detected at sector 1
      /dev/sdc: lvm2 label detected at sector 1
      /dev/sdd: No lvm label detected
      /dev/sdd1: No lvm label detected
      /dev/sdd2: No lvm label detected
      /dev/sde: lvm2 label detected at sector 1
    Scanning all devices to update lvmetad.
      /dev/sda: using cached size 937703088 sectors
      /dev/nbd0: using cached size 0 sectors
      /dev/nbd8: using cached size 0 sectors
      /dev/kunpeng03_HDD/kunpeng03_HDD: using cached size 7814029312 sectors
      /dev/kunpeng03_HDD/kunpeng03_HDD: using cached size 7814029312 sectors
    No PV info found on /dev/kunpeng03_HDD/kunpeng03_HDD for PVID .
      Request to drop PV /dev/kunpeng03_HDD/kunpeng03_HDD in lvmetad did not find any matching object.
      /dev/sda1: using cached size 16777217 sectors
      /dev/sda1: using cached size 16777217 sectors
    No PV info found on /dev/sda1 for PVID .
      Request to drop PV /dev/sda1 in lvmetad did not find any matching object.
      /dev/kunpeng03_SSD/kunpeng03_SSD: using cached size 937697280 sectors
      /dev/kunpeng03_SSD/kunpeng03_SSD: using cached size 937697280 sectors
    No PV info found on /dev/kunpeng03_SSD/kunpeng03_SSD for PVID .
      Request to drop PV /dev/kunpeng03_SSD/kunpeng03_SSD in lvmetad did not find any matching object.
      /dev/sda2: using cached size 4194305 sectors
      /dev/sda2: using cached size 4194305 sectors
    No PV info found on /dev/sda2 for PVID .
      Request to drop PV /dev/sda2 in lvmetad did not find any matching object.
      /dev/data/datastore1: using cached size 20971520 sectors
      /dev/data/datastore1: using cached size 20971520 sectors
    No PV info found on /dev/data/datastore1 for PVID .
      Request to drop PV /dev/data/datastore1 in lvmetad did not find any matching object.
      /dev/sda3: using cached size 209724417 sectors
      /dev/sda3: using cached size 209724417 sectors
    No PV info found on /dev/sda3 for PVID .
      Request to drop PV /dev/sda3 in lvmetad did not find any matching object.
      /dev/data/datastore2: using cached size 83886080 sectors
      /dev/data/datastore2: using cached size 83886080 sectors
    No PV info found on /dev/data/datastore2 for PVID .
      Request to drop PV /dev/data/datastore2 in lvmetad did not find any matching object.
      /dev/sda4: using cached size 390649425 sectors
      /dev/sda4: using cached size 390649425 sectors
    No PV info found on /dev/sda4 for PVID .
      Request to drop PV /dev/sda4 in lvmetad did not find any matching object.
      /dev/data/backup: using cached size 83886080 sectors
      /dev/data/backup: using cached size 83886080 sectors
    No PV info found on /dev/data/backup for PVID .
      Request to drop PV /dev/data/backup in lvmetad did not find any matching object.
      /dev/sda5: using cached size 316340881 sectors
      /dev/sda5: using cached size 316340881 sectors
    No PV info found on /dev/sda5 for PVID .
      Request to drop PV /dev/sda5 in lvmetad did not find any matching object.
      /dev/data/arstore: using cached size 20971520 sectors
      /dev/data/arstore: using cached size 20971520 sectors
    No PV info found on /dev/data/arstore for PVID .
      Request to drop PV /dev/data/arstore in lvmetad did not find any matching object.
      /dev/data/log: using cached size 70393856 sectors
      /dev/data/log: using cached size 70393856 sectors
    No PV info found on /dev/data/log for PVID .
      Request to drop PV /dev/data/log in lvmetad did not find any matching object.
      /dev/mapper/mpatha: using cached size 41943040 sectors
      /dev/mapper/mpatha: using cached size 41943040 sectors
    No PV info found on /dev/mapper/mpatha for PVID .
      Request to drop PV /dev/mapper/mpatha in lvmetad did not find any matching object.
      /dev/mapper/mpathb: using cached size 7814037168 sectors
      /dev/mapper/mpathb: using cached size 7814037168 sectors
    No PV info found on /dev/mapper/mpathb for PVID .
      Request to drop PV /dev/mapper/mpathb in lvmetad did not find any matching object.
      /dev/sdb: using cached size 468862128 sectors
      /dev/sdb1: using cached size 2097152 sectors
      /dev/sdb1: using cached size 2097152 sectors
    No PV info found on /dev/sdb1 for PVID .
      Request to drop PV /dev/sdb1 in lvmetad did not find any matching object.
      /dev/sdb2: using cached size 2097152 sectors
      /dev/sdb2: using cached size 2097152 sectors
    No PV info found on /dev/sdb2 for PVID .
      Request to drop PV /dev/sdb2 in lvmetad did not find any matching object.
      /dev/sdb3: using cached size 16777216 sectors
      /dev/sdb3: using cached size 16777216 sectors
    No PV info found on /dev/sdb3 for PVID .
      Request to drop PV /dev/sdb3 in lvmetad did not find any matching object.
      /dev/sdb4: using cached size 167772160 sectors
      /dev/sdb4: using cached size 167772160 sectors
    No PV info found on /dev/sdb4 for PVID .
      Request to drop PV /dev/sdb4 in lvmetad did not find any matching object.
      /dev/sdb5: using cached size 280116367 sectors
      /dev/sdb5: using cached size 280116367 sectors
      /dev/sdc: using cached size 937703088 sectors
      /dev/sdc: using cached size 937703088 sectors
      /dev/nbd1: using cached size 0 sectors
      /dev/nbd9: using cached size 0 sectors
      /dev/sdd: using cached size 7814037168 sectors
      /dev/sdd1: using cached size 1048576 sectors
      /dev/sdd1: using cached size 1048576 sectors
    No PV info found on /dev/sdd1 for PVID .
      Request to drop PV /dev/sdd1 in lvmetad did not find any matching object.
      /dev/sdd2: using cached size 7812988524 sectors
      /dev/sdd2: using cached size 7812988524 sectors
    No PV info found on /dev/sdd2 for PVID .
      Request to drop PV /dev/sdd2 in lvmetad did not find any matching object.
      /dev/sde: using cached size 7814037168 sectors
      /dev/sde: using cached size 7814037168 sectors
      /dev/nbd2: using cached size 0 sectors
      /dev/nbd10: using cached size 0 sectors
      /dev/sdf: using cached size 41943040 sectors
      /dev/sdg: using cached size 41943040 sectors
      /dev/nbd3: using cached size 0 sectors
      /dev/nbd11: using cached size 0 sectors
      /dev/sdh: using cached size 7814037168 sectors
      /dev/sdi: using cached size 7814037168 sectors
      /dev/nbd4: using cached size 0 sectors
      /dev/nbd12: using cached size 0 sectors
      /dev/nbd5: using cached size 0 sectors
      /dev/nbd13: using cached size 0 sectors
      /dev/nbd6: using cached size 0 sectors
      /dev/nbd14: using cached size 0 sectors
      /dev/nbd7: using cached size 0 sectors
      /dev/nbd15: using cached size 0 sectors
  Enabling sanlock global lock
      Locking /run/lock/lvm/V_vg1 WB
      Request to lookup VG vg1 in lvmetad did not find any matching object.
      Unlocking /run/lock/lvm/V_vg1
      report/output_format not found in config: defaulting to basic
      log/report_command_log not found in config: defaulting to 0
      Locking /run/lock/lvm/P_orphans WB
      /dev/mapper/mpathb: size is 7814037168 sectors
      /dev/mapper/mpathb: using cached size 7814037168 sectors
      /dev/sda: size is 937703088 sectors
      /dev/nbd0: size is 0 sectors
      /dev/nbd8: size is 0 sectors
      /dev/kunpeng03_HDD/kunpeng03_HDD: size is 7814029312 sectors
      /dev/kunpeng03_HDD/kunpeng03_HDD: using cached size 7814029312 sectors
      /dev/sda1: size is 16777217 sectors
      /dev/sda1: using cached size 16777217 sectors
      /dev/kunpeng03_SSD/kunpeng03_SSD: size is 937697280 sectors
      /dev/kunpeng03_SSD/kunpeng03_SSD: using cached size 937697280 sectors
      /dev/sda2: size is 4194305 sectors
      /dev/sda2: using cached size 4194305 sectors
      /dev/data/datastore1: size is 20971520 sectors
      /dev/data/datastore1: using cached size 20971520 sectors
      /dev/sda3: size is 209724417 sectors
      /dev/sda3: using cached size 209724417 sectors
      /dev/data/datastore2: size is 83886080 sectors
      /dev/data/datastore2: using cached size 83886080 sectors
      /dev/sda4: size is 390649425 sectors
      /dev/sda4: using cached size 390649425 sectors
      /dev/data/backup: size is 83886080 sectors
      /dev/data/backup: using cached size 83886080 sectors
      /dev/sda5: size is 316340881 sectors
      /dev/sda5: using cached size 316340881 sectors
      /dev/data/arstore: size is 20971520 sectors
      /dev/data/arstore: using cached size 20971520 sectors
      /dev/data/log: size is 70393856 sectors
      /dev/data/log: using cached size 70393856 sectors
      /dev/mapper/mpatha: size is 41943040 sectors
      /dev/mapper/mpatha: using cached size 41943040 sectors
      /dev/mapper/mpathb: using cached size 7814037168 sectors
      /dev/mapper/mpathb: using cached size 7814037168 sectors
      /dev/sdb: size is 468862128 sectors
      /dev/sdb1: size is 2097152 sectors
      /dev/sdb1: using cached size 2097152 sectors
      /dev/sdb2: size is 2097152 sectors
      /dev/sdb2: using cached size 2097152 sectors
      /dev/sdb3: size is 16777216 sectors
      /dev/sdb3: using cached size 16777216 sectors
      /dev/sdb4: size is 167772160 sectors
      /dev/sdb4: using cached size 167772160 sectors
      /dev/sdb5: size is 280116367 sectors
      /dev/sdb5: using cached size 280116367 sectors
      /dev/sdc: size is 937703088 sectors
      /dev/sdc: using cached size 937703088 sectors
      /dev/nbd1: size is 0 sectors
      /dev/nbd9: size is 0 sectors
      /dev/sdd: size is 7814037168 sectors
      /dev/sdd1: size is 1048576 sectors
      /dev/sdd1: using cached size 1048576 sectors
      /dev/sdd2: size is 7812988524 sectors
      /dev/sdd2: using cached size 7812988524 sectors
      /dev/sde: size is 7814037168 sectors
      /dev/sde: using cached size 7814037168 sectors
      /dev/nbd2: size is 0 sectors
      /dev/nbd10: size is 0 sectors
      /dev/sdf: size is 41943040 sectors
      /dev/sdg: size is 41943040 sectors
      /dev/nbd3: size is 0 sectors
      /dev/nbd11: size is 0 sectors
      /dev/sdh: size is 7814037168 sectors
      /dev/sdi: size is 7814037168 sectors
      /dev/nbd4: size is 0 sectors
      /dev/nbd12: size is 0 sectors
      /dev/nbd5: size is 0 sectors
      /dev/nbd13: size is 0 sectors
      /dev/nbd6: size is 0 sectors
      /dev/nbd14: size is 0 sectors
      /dev/nbd7: size is 0 sectors
      /dev/nbd15: size is 0 sectors
      Locking /run/lock/lvm/V_kunpeng03_SSD RB
      Reading VG kunpeng03_SSD eO1T7X-3l1v-meu7-jEPB-BPUp-3RVZ-rdkSgM
      /dev/sdc: using cached size 937703088 sectors
      Processing PV /dev/sdc in VG kunpeng03_SSD.
      Unlocking /run/lock/lvm/V_kunpeng03_SSD
      Locking /run/lock/lvm/V_kunpeng03_HDD RB
      Reading VG kunpeng03_HDD lv0mEA-ITn0-dBQT-yyW1-1cpk-7Xy4-XHgSch
      /dev/sde: using cached size 7814037168 sectors
      Processing PV /dev/sde in VG kunpeng03_HDD.
      Unlocking /run/lock/lvm/V_kunpeng03_HDD
      Locking /run/lock/lvm/V_data RB
      Reading VG data fOPUby-tUCX-Klt6-3bN5-mR3J-YKZF-NjHwzm
      /dev/sdb5: using cached size 280116367 sectors
      Processing PV /dev/sdb5 in VG data.
      Unlocking /run/lock/lvm/V_data
      Locking #orphans_lvm2 already done
      Reading VG #orphans_lvm2
      Unlocking /run/lock/lvm/P_orphans
      Locking /run/lock/lvm/P_orphans WB
      Reading VG #orphans_lvm2
      Processing device /dev/kunpeng03_HDD/kunpeng03_HDD.
      Processing device /dev/sda1.
      Processing device /dev/kunpeng03_SSD/kunpeng03_SSD.
      Processing device /dev/sda2.
      Processing device /dev/data/datastore1.
      Processing device /dev/sda3.
      Processing device /dev/data/datastore2.
      Processing device /dev/sda4.
      Processing device /dev/data/backup.
      Processing device /dev/sda5.
      Processing device /dev/data/arstore.
      Processing device /dev/data/log.
      Processing device /dev/mapper/mpatha.
      Processing device /dev/mapper/mpathb.
      Processing device /dev/sdb1.
      Processing device /dev/sdb2.
      Processing device /dev/sdb3.
      Processing device /dev/sdb4.
      Processing device /dev/sdd1.
      Processing device /dev/sdd2.
      Label checksum incorrect on /dev/mapper/mpathb - ignoring
      /dev/mapper/mpathb: No lvm label detected
    Wiping signatures on new PV /dev/mapper/mpathb.
      /dev/mapper/mpathb: size is 7814037168 sectors
      devices/default_data_alignment not found in config: defaulting to 1
      Device /dev/mapper/mpathb: queue/minimum_io_size is 512 bytes.
      Device /dev/mapper/mpathb: queue/optimal_io_size is 1048576 bytes.
      /dev/mapper/mpathb: Setting PE alignment to 2048 sectors.
      Device /dev/mapper/mpathb: alignment_offset is 0 bytes.
      /dev/mapper/mpathb: Setting PE alignment offset to 0 sectors.
    Set up physical volume for "/dev/mapper/mpathb" with 7814037168 available sectors.
      Scanning for labels to wipe from /dev/mapper/mpathb
      /dev/mapper/mpathb: Wiping label at sector 1
    Zeroing start of device /dev/mapper/mpathb.
    Writing physical volume data to disk "/dev/mapper/mpathb".
      /dev/mapper/mpathb: Writing label to sector 1 with stored offset 32.
  Physical volume "/dev/mapper/mpathb" successfully created.
      Locking /run/lock/lvm/V_vg1 WB
    Adding physical volume '/dev/mapper/mpathb' to volume group 'vg1'
      /dev/mapper/mpathb: using cached size 7814037168 sectors
    Archiving volume group "vg1" metadata (seqno 0).
      /dev/mapper/mpathb: Writing label to sector 1 with stored offset 32.
      global/sanlock_lv_extend not found in config: defaulting to 256
    Creating logical volume lvmlock
      Adding segment of type striped to LV lvmlock.
    Creating volume group backup "/etc/lvm/backup/vg1" (seqno 2).
    Activating logical volume vg1/lvmlock locally.
      vg1/lvmlock is not active
      Locking LV r9XOQynWGcTyxHImqCf10joLQxYsEPWIa3NLJkxqHAEpU12ZbNhNz7G0i312kvVt (R)
    activation/volume_list configuration setting not defined: Checking only host tags for vg1/lvmlock.
      Getting target version for linear
      Found linear target v1.4.0.
      Getting target version for striped
      Found striped target v1.6.0.
    Creating vg1-lvmlock
    Loading table for vg1-lvmlock (252:9).
  device-mapper: reload ioctl on  (252:9) failed: Device or resource busy
    Removing vg1-lvmlock (252:9)
  Failed to activate new LV.
      Locking LV r9XOQynWGcTyxHImqCf10joLQxYsEPWIa3NLJkxqHAEpU12ZbNhNz7G0i312kvVt (NL)
    Creating volume group backup "/etc/lvm/backup/vg1" (seqno 3).
  Failed to create sanlock lv lvmlock in vg vg1
  Failed to create internal lv.
  Failed to initialize lock args for lock type sanlock
    Removing physical volume "/dev/mapper/mpathb" from volume group "vg1"
      /dev/mapper/mpathb: using cached size 7814037168 sectors
      /dev/mapper/mpathb: Writing label to sector 1 with stored offset 32.
  Volume group "vg1" successfully removed
      Unlocking /run/lock/lvm/V_vg1
      Unlocking /run/lock/lvm/P_orphans

systemd-udevd debug output:

Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: dm-8: Inotify event: 8 for /dev/dm-8
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: dm-8: device is closed, synthesising 'change' on /sys/devices/virtual/block/dm-8
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: dm-8: Device (SEQNUM=6644, ACTION=change) is queued
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: Validate module index
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: Check if link configuration needs reloading.
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: Successfully forked off 'n/a' as PID 149509.
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: dm-8: Worker [149509] is forked for processing SEQNUM=6644.
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Processing device (SEQNUM=6644, ACTION=change)
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Removing watch
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: /usr/lib/udev/rules.d/10-dm.rules:135 LINK 'mapper/mpathb'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: /usr/lib/udev/rules.d/11-dm-mpath.rules:35 Running PROGRAM '/sbin/multipath -U dm-8'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Starting '/sbin/multipath -U dm-8'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: Successfully forked off '(spawn)' as PID 149510.
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Process '/sbin/multipath -U dm-8' succeeded.
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: /usr/lib/udev/rules.d/11-dm-mpath.rules:105 Importing properties from results of 'kpartx_id 252 8 mpath-36001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Starting 'kpartx_id 252 8 mpath-36001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: Successfully forked off '(spawn)' as PID 149511.
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: 'kpartx_id 252 8 mpath-36001405413641b4dc704451b2fc59224'(out) 'DM_TYPE=scsi'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: 'kpartx_id 252 8 mpath-36001405413641b4dc704451b2fc59224'(out) 'DM_WWN=0x6001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: 'kpartx_id 252 8 mpath-36001405413641b4dc704451b2fc59224'(out) 'DM_SERIAL=36001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Process 'kpartx_id 252 8 mpath-36001405413641b4dc704451b2fc59224' succeeded.
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: /usr/lib/udev/rules.d/11-dm-mpath.rules:108 LINK 'disk/by-id/scsi-36001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: /usr/lib/udev/rules.d/11-dm-mpath.rules:109 LINK 'disk/by-id/wwn-0x6001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: /usr/lib/udev/rules.d/13-dm-disk.rules:17 LINK 'disk/by-id/dm-name-mpathb'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: /usr/lib/udev/rules.d/13-dm-disk.rules:18 LINK 'disk/by-id/dm-uuid-mpath-36001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: /usr/lib/udev/rules.d/13-dm-disk.rules:23 Importing properties from results of builtin command 'blkid'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Probe /dev/dm-8 with raid and offset=0
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: /usr/lib/udev/rules.d/66-kpartx.rules:35 RUN '/sbin/kpartx -un /dev/$name'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: /usr/lib/udev/rules.d/69-dm-lvm-metad.rules:38 LINK 'disk/by-id/lvm-pv-uuid-oLss3p-xUZN-W23c-6SKk-dGr5-hMwx-cMEnfY'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Handling device node '/dev/dm-8', devnum=b252:8
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Preserve already existing symlink '/dev/block/252:8' to '../dm-8'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Creating symlink '/dev/disk/by-id/lvm-pv-uuid-oLss3p-xUZN-W23c-6SKk-dGr5-hMwx-cMEnfY' to '../../dm-8'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Found 'b252:8' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fwwn-0x6001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Found 'b8:128' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fwwn-0x6001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Found 'b8:112' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fwwn-0x6001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Preserve already existing symlink '/dev/disk/by-id/wwn-0x6001405413641b4dc704451b2fc59224' to '../../dm-8'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Found 'b252:8' claiming '/run/udev/links/\x2fmapper\x2fmpathb'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Preserve already existing symlink '/dev/mapper/mpathb' to '../dm-8'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Found 'b252:8' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fdm-name-mpathb'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Preserve already existing symlink '/dev/disk/by-id/dm-name-mpathb' to '../../dm-8'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Found 'b252:8' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fscsi-36001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Found 'b8:128' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fscsi-36001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Found 'b8:112' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fscsi-36001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Preserve already existing symlink '/dev/disk/by-id/scsi-36001405413641b4dc704451b2fc59224' to '../../dm-8'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Found 'b252:8' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fdm-uuid-mpath-36001405413641b4dc704451b2fc59224'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Preserve already existing symlink '/dev/disk/by-id/dm-uuid-mpath-36001405413641b4dc704451b2fc59224' to '../../dm-8'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: sd-device: Created db file '/run/udev/data/b252:8' for '/devices/virtual/block/dm-8'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Running command "/sbin/kpartx -un /dev/dm-8"
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: dm-8: Starting '/sbin/kpartx -un /dev/dm-8'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149509]: Successfully forked off '(spawn)' as PID 149513.
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: 252:9: Device (SEQNUM=6645, ACTION=add) is queued
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: Successfully forked off 'n/a' as PID 149515.
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: 252:9: Worker [149515] is forked for processing SEQNUM=6645.
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: dm-9: Device (SEQNUM=6646, ACTION=add) is queued
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: 252:9: Processing device (SEQNUM=6645, ACTION=add)
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: 252:9: /usr/lib/udev/rules.d/40-kylin.rules:8 Running PROGRAM '/bin/uname -p'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: 252:9: Starting '/bin/uname -p'
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: Successfully forked off 'n/a' as PID 149516.
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: Successfully forked off '(spawn)' as PID 149517.
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: dm-9: Worker [149516] is forked for processing SEQNUM=6646.
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: dm-9: Processing device (SEQNUM=6646, ACTION=add)
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: dm-9: /usr/lib/udev/rules.d/40-kylin.rules:8 Running PROGRAM '/bin/uname -p'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: dm-9: Starting '/bin/uname -p'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: Successfully forked off '(spawn)' as PID 149518.
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: 252:9: '/bin/uname -p'(out) 'aarch64'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: 252:9: Process '/bin/uname -p' succeeded.
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: dm-9: '/bin/uname -p'(out) 'aarch64'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: dm-9: Process '/bin/uname -p' succeeded.
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: 252:9: Device (SEQNUM=6645, ACTION=add) processed
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: 252:9: sd-device-monitor: Passed 153 byte to netlink monitor
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: dm-9: /usr/lib/udev/rules.d/50-udev-default.rules:62 GROUP 6
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: dm-9: Device (SEQNUM=6647, ACTION=remove) is queued
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: dm-9: Handling device node '/dev/dm-9', devnum=b252:9
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: dm-9: Setting permissions /dev/dm-9, uid=0, gid=6, mode=0660
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: dm-9: Creating symlink '/dev/block/252:9' to '../dm-9'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: dm-9: sd-device: Created db file '/run/udev/data/b252:9' for '/devices/virtual/block/dm-9'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: dm-9: Device (SEQNUM=6646, ACTION=add) processed
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: dm-9: sd-device-monitor: Passed 344 byte to netlink monitor
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: dm-9: sd-device-monitor: Passed 213 byte to netlink monitor
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: 252:9: Device (SEQNUM=6648, ACTION=remove) is queued
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: dm-9: Processing device (SEQNUM=6647, ACTION=remove)
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: 252:9: sd-device-monitor: Passed 142 byte to netlink monitor
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: 252:9: Processing device (SEQNUM=6648, ACTION=remove)
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: dm-9: Device (SEQNUM=6649, ACTION=remove) is queued
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: 252:9: Device (SEQNUM=6648, ACTION=remove) processed
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: dm-9: /usr/lib/udev/rules.d/95-dm-notify.rules:12 RUN '/usr/sbin/dmsetup udevcomplete $env{DM_COOKIE}'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: dm-9: Running command "/usr/sbin/dmsetup udevcomplete 21026140"
Jan 10 11:31:16 kunpeng03 systemd-udevd[149516]: 252:9: sd-device-monitor: Passed 142 byte to netlink monitor
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: dm-9: Starting '/usr/sbin/dmsetup udevcomplete 21026140'
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: Successfully forked off '(spawn)' as PID 149519.
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: dm-9: Process '/usr/sbin/dmsetup udevcomplete 21026140' succeeded.
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: dm-9: Device (SEQNUM=6647, ACTION=remove) processed
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: dm-9: sd-device-monitor: Passed 352 byte to netlink monitor
Jan 10 11:31:16 kunpeng03 systemd-udevd[1096]: dm-9: sd-device-monitor: Passed 194 byte to netlink monitor
Jan 10 11:31:16 kunpeng03 systemd-udevd[149515]: dm-9: Processing device (SEQNUM=6649, ACTION=remove)
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-9: Device (SEQNUM=6649, ACTION=remove) processed
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-9: sd-device-monitor: Passed 223 byte to netlink monitor
Jan 10 11:31:17 kunpeng03 systemd-udevd[149509]: dm-8: Process '/sbin/kpartx -un /dev/dm-8' succeeded.
Jan 10 11:31:17 kunpeng03 systemd-udevd[149509]: dm-8: Adding watch on '/dev/dm-8'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149509]: dm-8: sd-device: Created db file '/run/udev/data/b252:8' for '/devices/virtual/block/dm-8'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149509]: dm-8: Device (SEQNUM=6644, ACTION=change) processed
Jan 10 11:31:17 kunpeng03 systemd-udevd[149509]: dm-8: sd-device-monitor: Passed 1052 byte to netlink monitor
Jan 10 11:31:17 kunpeng03 systemd-udevd[1096]: dm-8: Inotify event: 8 for /dev/dm-8
Jan 10 11:31:17 kunpeng03 systemd-udevd[1096]: dm-8: device is closed, synthesising 'change' on /sys/devices/virtual/block/dm-8
Jan 10 11:31:17 kunpeng03 systemd-udevd[1096]: dm-8: Device (SEQNUM=6650, ACTION=change) is queued
Jan 10 11:31:17 kunpeng03 systemd-udevd[1096]: dm-8: sd-device-monitor: Passed 207 byte to netlink monitor
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Processing device (SEQNUM=6650, ACTION=change)
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Removing watch
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: /usr/lib/udev/rules.d/10-dm.rules:135 LINK 'mapper/mpathb'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: /usr/lib/udev/rules.d/11-dm-mpath.rules:35 Running PROGRAM '/sbin/multipath -U dm-8'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Starting '/sbin/multipath -U dm-8'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: Successfully forked off '(spawn)' as PID 149520.
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Process '/sbin/multipath -U dm-8' succeeded.
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: /usr/lib/udev/rules.d/11-dm-mpath.rules:105 Importing properties from results of 'kpartx_id 252 8 mpath-36001405413641b4dc704451b2fc59224'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Starting 'kpartx_id 252 8 mpath-36001405413641b4dc704451b2fc59224'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: Successfully forked off '(spawn)' as PID 149521.
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: 'kpartx_id 252 8 mpath-36001405413641b4dc704451b2fc59224'(out) 'DM_TYPE=scsi'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: 'kpartx_id 252 8 mpath-36001405413641b4dc704451b2fc59224'(out) 'DM_WWN=0x6001405413641b4dc704451b2fc59224'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: 'kpartx_id 252 8 mpath-36001405413641b4dc704451b2fc59224'(out) 'DM_SERIAL=36001405413641b4dc704451b2fc59224'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Process 'kpartx_id 252 8 mpath-36001405413641b4dc704451b2fc59224' succeeded.
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: /usr/lib/udev/rules.d/11-dm-mpath.rules:108 LINK 'disk/by-id/scsi-36001405413641b4dc704451b2fc59224'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: /usr/lib/udev/rules.d/11-dm-mpath.rules:109 LINK 'disk/by-id/wwn-0x6001405413641b4dc704451b2fc59224'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: /usr/lib/udev/rules.d/13-dm-disk.rules:17 LINK 'disk/by-id/dm-name-mpathb'
Jan 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: /usr/lib/udev/rules.d/13-dm-disk.rules:18 LINK 'disk/by-id/dm-uuid-mpath-36001405413641b4dc704451b2fc59224'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: /usr/lib/udev/rules.d/13-dm-disk.rules:23 Importing properties from results of builtin command 'blkid'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Probe /dev/dm-8 with raid and offset=0
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: /usr/lib/udev/rules.d/66-kpartx.rules:35 RUN '/sbin/kpartx -un /dev/$name'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: /usr/lib/udev/rules.d/69-dm-lvm-metad.rules:38 LINK 'disk/by-id/lvm-pv-uuid-oLss3p-xUZN-W23c-6SKk-dGr5-hMwx-cMEnfY'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Handling device node '/dev/dm-8', devnum=b252:8
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Preserve already existing symlink '/dev/block/252:8' to '../dm-8'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Found 'b252:8' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fwwn-0x6001405413641b4dc704451b2fc59224'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Found 'b8:128' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fwwn-0x6001405413641b4dc704451b2fc59224'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Found 'b8:112' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fwwn-0x6001405413641b4dc704451b2fc59224'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Preserve already existing symlink '/dev/disk/by-id/wwn-0x6001405413641b4dc704451b2fc59224' to '../../dm-8'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Found 'b252:8' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fdm-uuid-mpath-36001405413641b4dc704451b2fc59224'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Preserve already existing symlink '/dev/disk/by-id/dm-uuid-mpath-36001405413641b4dc704451b2fc59224' to '../../dm-8'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Found 'b252:8' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2flvm-pv-uuid-oLss3p-xUZN-W23c-6SKk-dGr5-hMwx-cMEnfY'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Preserve already existing symlink '/dev/disk/by-id/lvm-pv-uuid-oLss3p-xUZN-W23c-6SKk-dGr5-hMwx-cMEnfY' to '../../dm-8'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Found 'b252:8' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fscsi-36001405413641b4dc704451b2fc59224'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Found 'b8:128' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fscsi-36001405413641b4dc704451b2fc59224'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Found 'b8:112' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fscsi-36001405413641b4dc704451b2fc59224'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Preserve already existing symlink '/dev/disk/by-id/scsi-36001405413641b4dc704451b2fc59224' to '../../dm-8'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Found 'b252:8' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fdm-name-mpathb'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Preserve already existing symlink '/dev/disk/by-id/dm-name-mpathb' to '../../dm-8'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Found 'b252:8' claiming '/run/udev/links/\x2fmapper\x2fmpathb'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Preserve already existing symlink '/dev/mapper/mpathb' to '../dm-8'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: sd-device: Created db file '/run/udev/data/b252:8' for '/devices/virtual/block/dm-8'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Running command "/sbin/kpartx -un /dev/dm-8"
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Starting '/sbin/kpartx -un /dev/dm-8'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: Successfully forked off '(spawn)' as PID 149523.
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Process '/sbin/kpartx -un /dev/dm-8' succeeded.
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Adding watch on '/dev/dm-8'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: sd-device: Created db file '/run/udev/data/b252:8' for '/devices/virtual/block/dm-8'
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: Device (SEQNUM=6650, ACTION=change) processed
1月 10 11:31:17 kunpeng03 systemd-udevd[149515]: dm-8: sd-device-monitor: Passed 1052 byte to netlink monitor
1月 10 11:31:20 kunpeng03 systemd-udevd[1096]: Cleanup idle workers
1月 10 11:31:20 kunpeng03 systemd-udevd[149509]: Unload module index
1月 10 11:31:20 kunpeng03 systemd-udevd[149515]: Unload module index
1月 10 11:31:20 kunpeng03 systemd-udevd[149509]: Unloaded link configuration context.
1月 10 11:31:20 kunpeng03 systemd-udevd[149515]: Unloaded link configuration context.
1月 10 11:31:20 kunpeng03 systemd-udevd[149516]: Unload module index
1月 10 11:31:20 kunpeng03 systemd-udevd[149516]: Unloaded link configuration context.
1月 10 11:31:20 kunpeng03 systemd-udevd[1096]: Worker [149509] exited
1月 10 11:31:20 kunpeng03 systemd-udevd[1096]: Worker [149515] exited
1月 10 11:31:20 kunpeng03 systemd-udevd[1096]: Worker [149516] exited

lsblk:

# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                             8:0    0 447.1G  0 disk
├─sda1                          8:1    0     8G  0 part
├─sda2                          8:2    0     2G  0 part
├─sda3                          8:3    0   100G  0 part
├─sda4                          8:4    0 186.3G  0 part
└─sda5                          8:5    0 150.9G  0 part
sdb                             8:16   0 223.6G  0 disk
├─sdb1                          8:17   0     1G  0 part  /boot/efi
├─sdb2                          8:18   0     1G  0 part  /boot
├─sdb3                          8:19   0     8G  0 part  [SWAP]
├─sdb4                          8:20   0    80G  0 part  /
└─sdb5                          8:21   0 133.6G  0 part
  ├─data-datastore1           252:2    0    10G  0 lvm   /datastore1
  ├─data-datastore2           252:3    0    40G  0 lvm   /datastore2
  ├─data-backup               252:4    0    40G  0 lvm   /backup
  ├─data-arstore              252:5    0    10G  0 lvm
  └─data-log                  252:6    0  33.6G  0 lvm   /var/log
sdc                             8:32   0 447.1G  0 disk
└─kunpeng03_SSD-kunpeng03_SSD 252:1    0 447.1G  0 lvm   /LOCAL/kunpeng03_SSD
sdd                             8:48   0   3.7T  0 disk
├─sdd1                          8:49   0   512M  0 part
└─sdd2                          8:50   0   3.7T  0 part
sde                             8:64   0   3.7T  0 disk
└─kunpeng03_HDD-kunpeng03_HDD 252:0    0   3.7T  0 lvm   /LOCAL/kunpeng03_HDD
sdf                             8:80   0    20G  0 disk
└─mpatha                      252:7    0    20G  0 mpath
sdg                             8:96   0    20G  0 disk
└─mpatha                      252:7    0    20G  0 mpath
sdh                             8:112  0   3.7T  0 disk
└─mpathb                      252:8    0   3.7T  0 mpath
sdi                             8:128  0   3.7T  0 disk
└─mpathb                      252:8    0   3.7T  0 mpath

multipath -ll:

# multipath -ll
mpathb (36001405413641b4dc704451b2fc59224) dm-8 LIO-ORG,md3
size=3.6T features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 10:0:0:1 sdh   8:112  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 9:0:0:1  sdi   8:128  active ready running
mpatha (36001405a818eac141a2479c9cfd6d1c4) dm-7 LIO-ORG,md2
size=20G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 10:0:0:0 sdg   8:96   active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 9:0:0:0  sdf   8:80   active ready running

Am I missing anything?

teigland commented 2 years ago

Hi, this looks familiar, but I don't recall seeing it in a long time, so it may have been fixed. Like you, my first suspicion would be interference from udev. You're using an old version of lvm, so it would be interesting if you could see it with a recent version. If so please send the full -vvvv (four v's).
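One way to probe the udev-interference theory is to capture the udev events fired for the new dm node during activation and count the remove events; more than one remove racing a single activation points at udev. This is only a sketch: the sample lines below are excerpted from the journal output earlier in this report (the dm-9 node number comes from there too), and live they could be captured with `journalctl -u systemd-udevd -f` while reproducing.

```shell
# Journal lines excerpted from this report; live, capture them with:
#   journalctl -u systemd-udevd -f | grep dm-9
log='dm-9: Device (SEQNUM=6647, ACTION=remove) processed
dm-8: Device (SEQNUM=6644, ACTION=change) processed
dm-9: Device (SEQNUM=6649, ACTION=remove) processed'
# Count remove events hitting the new lock LV node (dm-9). Repeated
# removes during a single activation suggest udev is racing the reload.
removes=$(printf '%s\n' "$log" | grep -c 'dm-9.*ACTION=remove')
echo "$removes"
```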

ncist2011 commented 2 years ago

Hi, this looks familiar, but I don't recall seeing it in a long time, so it may have been fixed. Like you, my first suspicion would be interference from udev. You're using an old version of lvm, so it would be interesting if you could see it with a recent version. If so please send the full -vvvv (four v's).

my lvm version:

lvm version
  LVM version:     2.02.181(2) (2018-08-01)
  Library version: 1.02.150 (2018-08-01)
  Driver version:  4.39.0
  Configuration:   ./configure --enable-lvmlockd-sanlock
ncist2011 commented 2 years ago

this is the -vvvv outputs:

vgcreate.txt

ncist2011 commented 2 years ago
#metadata/vg.c:68            Allocated VG global_lock at 0xaaae10f48df0.
#format_text/import_vsn1.c:591           Importing logical volume global_lock/lvmlock.
#cache/lvmetad.c:1182          Sending lvmetad pending VG global_lock (seqno 2)
#format_text/format-text.c:331           Reading mda header sector from /dev/mapper/mpatha at 65536
#format_text/format-text.c:790           Committing global_lock metadata (2) to /dev/mapper/mpatha header at 65536
#locking/locking.c:331           Dropping cache for global_lock.
#metadata/vg.c:83            Freeing VG global_lock at 0xaaae10f61240.
#mm/memlock.c:594           Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
#format_text/archiver.c:576       Creating volume group backup "/etc/lvm/backup/global_lock" (seqno 2).
#format_text/format-text.c:999           Writing global_lock metadata to /etc/lvm/backup/.lvm_kunpeng03_1875425_722094501
#format_text/format-text.c:1018          Renaming /etc/lvm/backup/.lvm_kunpeng03_1875425_722094501 to /etc/lvm/backup/global_lock.tmp
#format_text/format-text.c:1043          Committing global_lock metadata (2)
#format_text/format-text.c:1044          Renaming /etc/lvm/backup/global_lock.tmp to /etc/lvm/backup/global_lock
#metadata/lv.c:1511      Activating logical volume global_lock/lvmlock locally.
#activate/dev_manager.c:779           Getting device info for global_lock-lvmlock [LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y].
#ioctl/libdm-iface.c:1859          dm info  LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y [ noopencount flush ]   [16384] (*1)
#activate/dev_manager.c:761           Skipping checks for old devices without LVM- dm uuid prefix (kernel vsn 4 >= 3).
#activate/activate.c:1578        global_lock/lvmlock is not active
#locking/file_locking.c:100         Locking LV 9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y (R)
#activate/activate.c:466       activation/volume_list configuration setting not defined: Checking only host tags for global_lock/lvmlock.
#activate/activate.c:2803          Activating global_lock/lvmlock noscan.
#activate/dev_manager.c:779           Getting device info for global_lock-lvmlock [LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y].
#ioctl/libdm-iface.c:1859          dm info  LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y [ noopencount flush ]   [16384] (*1)
#mm/memlock.c:626           Entering prioritized section (activating).
#mm/memlock.c:489           Raised task priority 0 -> -18.
#activate/dev_manager.c:3225          Creating ACTIVATE tree for global_lock/lvmlock.
#activate/dev_manager.c:779           Getting device info for global_lock-lvmlock [LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y].
#ioctl/libdm-iface.c:1859          dm info  LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y [ opencount flush ]   [16384] (*1)
#activate/dev_manager.c:779           Getting device info for global_lock-lvmlock-real [LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y-real].
#ioctl/libdm-iface.c:1859          dm info  LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y-real [ opencount flush ]   [16384] (*1)
#activate/dev_manager.c:779           Getting device info for global_lock-lvmlock-cow [LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y-cow].
#ioctl/libdm-iface.c:1859          dm info  LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y-cow [ opencount flush ]   [16384] (*1)
#activate/dev_manager.c:2869          Adding new LV global_lock/lvmlock to dtree
#libdm-deptree.c:604           Not matched uuid LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y in deptree.
#libdm-deptree.c:604           Not matched uuid LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y in deptree.
#activate/dev_manager.c:2791          Checking kernel supports striped segment type for global_lock/lvmlock
#activate/activate.c:522         Getting target version for linear
#ioctl/libdm-iface.c:1859          dm versions   [ opencount flush ]   [16384] (*1)
#activate/activate.c:559         Found linear target v1.4.0.
#activate/activate.c:522         Getting target version for striped
#ioctl/libdm-iface.c:1859          dm versions   [ opencount flush ]   [16384] (*1)
#activate/activate.c:559         Found striped target v1.6.0.
#ioctl/libdm-iface.c:1859          dm deps   (252:7) [ opencount flush ]   [16384] (*1)
#libdm-deptree.c:1944      Creating global_lock-lvmlock
#ioctl/libdm-iface.c:1859          dm create global_lock-lvmlock LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y [ noopencount flush ]   [16384] (*1)
#libdm-deptree.c:2696      Loading table for global_lock-lvmlock (252:9).
#libdm-deptree.c:2641          Adding target to (252:9): 0 524288 linear 252:7 22528
#ioctl/libdm-iface.c:1859          dm table   (252:9) [ opencount flush ]   [16384] (*1)
#ioctl/libdm-iface.c:1859          dm reload   (252:9) [ noopencount flush ]   [16384] (*1)
#ioctl/libdm-iface.c:1897    device-mapper: reload ioctl on  (252:9) failed: Device or Resource busy
#libdm-deptree.c:993       Removing global_lock-lvmlock (252:9)
#libdm-common.c:2434          Udev cookie 0xd4defd5 (semid 425985) created
#libdm-common.c:2454          Udev cookie 0xd4defd5 (semid 425985) incremented to 1
#libdm-common.c:2326          Udev cookie 0xd4defd5 (semid 425985) incremented to 2
#libdm-common.c:2576          Udev cookie 0xd4defd5 (semid 425985) assigned to REMOVE task(2) with flags SUBSYSTEM_0        (0x100)
#ioctl/libdm-iface.c:1859          dm remove   (252:9) [ noopencount flush ]   [16384] (*1)
#libdm-common.c:1488          global_lock-lvmlock: Stacking NODE_DEL [verify_udev]
#libdm-deptree.c:2846          <backtrace>
#activate/dev_manager.c:3291          <backtrace>
#activate/dev_manager.c:3331          <backtrace>
#activate/activate.c:1387          <backtrace>
#activate/activate.c:2822          <backtrace>
#mm/memlock.c:638           Leaving section (activated).
#activate/activate.c:2858          <backtrace>
#locking/locking.c:275           <backtrace>
#locking/locking.c:352           <backtrace>
#metadata/lv.c:1513          <backtrace>
#metadata/lv_manip.c:7894    Failed to activate new LV.
#locking/file_locking.c:95          Locking LV 9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y (NL)
#activate/activate.c:2633          Deactivating global_lock/lvmlock.
#activate/dev_manager.c:779           Getting device info for global_lock-lvmlock [LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y].
#ioctl/libdm-iface.c:1859          dm info  LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y [ noopencount flush ]   [16384] (*1)
#metadata/pv_manip.c:417           /dev/mapper/mpatha 0:      0   5117: NULL(0:0)
#locking/locking.c:331           Dropping cache for global_lock.
#mm/memlock.c:594           Unlock: Memlock counters: prioritized:1 locked:0 critical:0 daemon:0 suspended:0
#mm/memlock.c:502           Restoring original task priority 0.
#format_text/format-text.c:331           Reading mda header sector from /dev/mapper/mpatha at 65536
#format_text/format-text.c:678           Writing metadata for VG global_lock to /dev/mapper/mpatha at 68608 len 721 (wrap 0)
#format_text/format-text.c:331           Reading mda header sector from /dev/mapper/mpatha at 65536
#format_text/format-text.c:790           Pre-Committing global_lock metadata (3) to /dev/mapper/mpatha header at 65536
#metadata/vg.c:68            Allocated VG global_lock at 0xaaae10f6b640.
#cache/lvmetad.c:1182          Sending lvmetad pending VG global_lock (seqno 3)
#format_text/format-text.c:331           Reading mda header sector from /dev/mapper/mpatha at 65536
#format_text/format-text.c:790           Committing global_lock metadata (3) to /dev/mapper/mpatha header at 65536
#locking/locking.c:331           Dropping cache for global_lock.
#metadata/vg.c:83            Freeing VG global_lock at 0xaaae10f48df0.
#mm/memlock.c:594           Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
#format_text/archiver.c:576       Creating volume group backup "/etc/lvm/backup/global_lock" (seqno 3).
#format_text/format-text.c:999           Writing global_lock metadata to /etc/lvm/backup/.lvm_kunpeng03_1875425_2034702906
#format_text/format-text.c:1018          Renaming /etc/lvm/backup/.lvm_kunpeng03_1875425_2034702906 to /etc/lvm/backup/global_lock.tmp
#format_text/format-text.c:1043          Committing global_lock metadata (3)
#format_text/format-text.c:1044          Renaming /etc/lvm/backup/global_lock.tmp to /etc/lvm/backup/global_lock
#metadata/lv_manip.c:8083          <backtrace>
#locking/lvmlockd.c:355     Failed to create sanlock lv lvmlock in vg global_lock
#locking/lvmlockd.c:639     Failed to create internal lv.
#vgcreate.c:189     Failed to initialize lock args for lock type sanlock
#cache/lvmetad.c:1308          Sending lvmetad pending remove VG global_lock
#format_text/format-text.c:331           Reading mda header sector from /dev/mapper/mpatha at 65536
#metadata/metadata.c:562       Removing physical volume "/dev/mapper/mpatha" from volume group "global_lock"
#device/dev-io.c:336         /dev/mapper/mpatha: using cached size 41943040 sectors
#cache/lvmcache.c:2080          lvmcache /dev/mapper/mpatha: now in VG #orphans_lvm2 (#orphans_lvm2) with 1 mda(s).
#format_text/format-text.c:1460          Creating metadata area on /dev/mapper/mpatha at sector 128 size 22400 sectors
#format_text/text_label.c:184           /dev/mapper/mpatha: Preparing PV label header gwNKdu-c2NH-y2Ni-kJ6E-pDvg-jbZx-A0oJst size 21474836480 with da1 (22528s, 0s) mda1 (128s, 22400s)
#label/label.c:202         /dev/mapper/mpatha: Writing label to sector 1 with stored offset 32.
#format_text/format-text.c:331           Reading mda header sector from /dev/mapper/mpatha at 65536
#cache/lvmetad.c:1671          Telling lvmetad to store PV /dev/mapper/mpatha (gwNKdu-c2NH-y2Ni-kJ6E-pDvg-jbZx-A0oJst)
#cache/lvmetad.c:1338          Telling lvmetad to remove VGID 9ivzxu-QSyb-M0Lk-7Nd0-BkBI-eWgw-avYrYf (global_lock)
#metadata/metadata.c:592     Volume group "global_lock" successfully removed
#vgcreate.c:192           <backtrace>
#mm/memlock.c:594           Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
#activate/fs.c:491           Syncing device names
#libdm-common.c:2361          Udev cookie 0xd4defd5 (semid 425985) decremented to 0
#libdm-common.c:2650          Udev cookie 0xd4defd5 (semid 425985) waiting for zero
#libdm-common.c:2376          Udev cookie 0xd4defd5 (semid 425985) destroyed
#libdm-common.c:1488          global_lock-lvmlock: Processing NODE_DEL [verify_udev]
#locking/locking.c:331           Dropping cache for global_lock.
#misc/lvm-flock.c:70          Unlocking /run/lock/lvm/V_global_lock
#misc/lvm-flock.c:47            _undo_flock /run/lock/lvm/V_global_lock
#cache/lvmcache.c:751           lvmcache has no info for vgname "global_lock".
#locking/locking.c:331           Dropping cache for #orphans.
#misc/lvm-flock.c:70          Unlocking /run/lock/lvm/P_orphans
#misc/lvm-flock.c:47            _undo_flock /run/lock/lvm/P_orphans
#cache/lvmcache.c:751           lvmcache has no info for vgname "#orphans".
#metadata/vg.c:83            Freeing VG global_lock at 0xaaae10f6b640.
#metadata/vg.c:83            Freeing VG global_lock at 0xaaae10f40dd0.
#daemon-client.c:179           Closing daemon socket (fd 4).
#cache/lvmcache.c:2535          Dropping VG info
#cache/lvmcache.c:751           lvmcache has no info for vgname "#orphans_lvm2" with VGID #orphans_lvm2.
#cache/lvmcache.c:751           lvmcache has no info for vgname "#orphans_lvm2".
#cache/lvmcache.c:2082          lvmcache: Initialised VG #orphans_lvm2.
#lvmcmdline.c:3042          Completed: vgcreate global_lock /dev/mapper/mpatha --shared --metadatasize 10M -vvvv
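As an aside, the table LVM tried to load on (252:9) (the "Adding target to (252:9): 0 524288 linear 252:7 22528" line above) is a plain linear mapping onto mpatha. Decoding its length field (a quick sketch, using the table line verbatim from this log) shows the size of the lvmlock LV being activated:

```shell
# dm table format: <start_sector> <length_sectors> <target> <target args...>
table='0 524288 linear 252:7 22528'
set -- $table
# Length is in 512-byte sectors: 524288 sectors = 256 MiB.
size_mib=$(( $2 * 512 / 1024 / 1024 ))
echo "lvmlock LV size: ${size_mib} MiB"
```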
ncist2011 commented 2 years ago

strace:

renameat(AT_FDCWD, "/etc/lvm/backup/.lvm_kunpeng03_3245247_306622229", AT_FDCWD, "/etc/lvm/backup/global_lock.tmp") = 0
renameat(AT_FDCWD, "/etc/lvm/backup/global_lock.tmp", AT_FDCWD, "/etc/lvm/backup/global_lock") = 0
newfstatat(AT_FDCWD, "/etc/lvm/backup/global_lock.tmp", 0xfffffe2c3bb0, 0) = -1 ENOENT 
openat(AT_FDCWD, "/etc/lvm/backup", O_RDONLY) = 34
fsync(34)                               = 0
close(34)                               = 0
ioctl(17, DM_DEV_STATUS, {version=4.0.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG} => {version=4.39.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG}) = -1 ENXIO 
ioctl(17, DM_DEV_STATUS, {version=4.0.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG} => {version=4.39.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG}) = -1 ENXIO 
getpriority(PRIO_PROCESS, 0)            = 20
setpriority(PRIO_PROCESS, 0, -18)       = 0
semctl(0, 0, SEM_INFO, 0xfffffe2c3990)  = 0
faccessat(AT_FDCWD, "/run/udev/control", F_OK) = 0
ioctl(17, DM_DEV_STATUS, {version=4.0.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG} => {version=4.39.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG}) = -1 ENXIO 
ioctl(17, DM_DEV_STATUS, {version=4.0.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq-real", flags=DM_EXISTS_FLAG} => {version=4.39.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq-real", flags=DM_EXISTS_FLAG}) = -1 ENXIO 
ioctl(17, DM_DEV_STATUS, {version=4.0.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq-cow", flags=DM_EXISTS_FLAG} => {version=4.39.0, data_size=16384, uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq-cow", flags=DM_EXISTS_FLAG}) = -1 ENXIO 
ioctl(17, DM_LIST_VERSIONS, {version=4.1.0, data_size=16384, data_start=312, flags=DM_EXISTS_FLAG} => {version=4.39.0, data_size=431, data_start=312, flags=DM_EXISTS_FLAG, ...}) = 0
ioctl(17, DM_LIST_VERSIONS, {version=4.1.0, data_size=16384, data_start=312, flags=DM_EXISTS_FLAG} => {version=4.39.0, data_size=431, data_start=312, flags=DM_EXISTS_FLAG, ...}) = 0
newfstatat(AT_FDCWD, "/dev/mapper/mpatha", {st_mode=S_IFBLK|0660, st_rdev=makedev(0xfc, 0x5), ...}, 0) = 0
newfstatat(AT_FDCWD, "/dev/mapper/mpatha", {st_mode=S_IFBLK|0660, st_rdev=makedev(0xfc, 0x5), ...}, 0) = 0
ioctl(17, DM_TABLE_DEPS, {version=4.0.0, data_size=16384, data_start=312, dev=makedev(0xfc, 0x5), flags=DM_EXISTS_FLAG|DM_PERSISTENT_DEV_FLAG} => {version=4.39.0, data_size=336, data_start=312, dev=makedev(0xfc, 0x5), name="mpatha", uuid="mpath-360014052ed4ec784c214f28abde3eb88", target_count=1, open_count=1, event_nr=0, flags=DM_EXISTS_FLAG|DM_PERSISTENT_DEV_FLAG|DM_ACTIVE_PRESENT_FLAG, ...}) = 0
ioctl(17, DM_DEV_CREATE, {version=4.0.0, data_size=16384, name="global_lock-lvmlock", uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG} => {version=4.39.0, data_size=305, dev=makedev(0xfc, 0x9), name="global_lock-lvmlock", uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", target_count=0, open_count=0, event_nr=0, flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG}) = 0
ioctl(17, DM_TABLE_STATUS, {version=4.0.0, data_size=16384, data_start=312, dev=makedev(0xfc, 0x9), flags=DM_EXISTS_FLAG|DM_PERSISTENT_DEV_FLAG|DM_STATUS_TABLE_FLAG} => {version=4.39.0, data_size=305, data_start=312, dev=makedev(0xfc, 0x9), name="global_lock-lvmlock", uuid="LVM-wyG94PciWphddM2iohvC2jpjyJhmUPZoLvfmUGGBIgmvaXQbZ8sntv7zmM0oiWlq", target_count=0, open_count=0, event_nr=0, flags=DM_EXISTS_FLAG|DM_PERSISTENT_DEV_FLAG|DM_STATUS_TABLE_FLAG}) = 0
ioctl(17, DM_TABLE_LOAD, {version=4.0.0, data_size=16384, data_start=312, dev=makedev(0xfc, 0x9), target_count=1, flags=DM_EXISTS_FLAG|DM_PERSISTENT_DEV_FLAG|DM_SKIP_BDGET_FLAG, ...} => {version=4.39.0, data_size=16384, data_start=312, dev=makedev(0xfc, 0x9), flags=DM_EXISTS_FLAG|DM_PERSISTENT_DEV_FLAG|DM_SKIP_BDGET_FLAG}) = -1 EBUSY (Device or resource busy)
openat(AT_FDCWD, "/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 34
fstat(34, {st_mode=S_IFREG|0644, st_size=2997, ...}) = 0
read(34, "# Locale name alias data base.\n#"..., 8192) = 2997
read(34, "", 8192)                      = 0
close(34)                               = 0
openat(AT_FDCWD, "/usr/share/locale/zh_CN.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT
openat(AT_FDCWD, "/usr/share/locale/zh_CN.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT
openat(AT_FDCWD, "/usr/share/locale/zh_CN/LC_MESSAGES/libc.mo", O_RDONLY) = 34
fstat(34, {st_mode=S_IFREG|0644, st_size=131494, ...}) = 0
zkabelac commented 2 years ago

Hi, this looks familiar, but I don't recall seeing it in a long time, so it may have been fixed. Like you, my first suspicion would be interference from udev. You're using an old version of lvm, so it would be interesting if you could see it with a recent version. If so please send the full -vvvv (four v's).

my lvm version:

lvm version
  LVM version:     2.02.181(2) (2018-08-01)
  Library version: 1.02.150 (2018-08-01)
  Driver version:  4.39.0
  Configuration:   ./configure --enable-lvmlockd-sanlock

A side question: do you actually compile LVM yourself, or do you use a distro build?

Just enabling sanlock is not enough: the udev rules are not enabled by the default configuration, and installing them into your particular system takes some know-how (normally 'make install' should be sufficient, but there are corner cases; Debian, for instance, differs). For local builds, the full build log is definitely a big help.

However, this particular error message seems to suggest that a DM device with that name already exists, possibly with a different UUID.

It would be helpful to post the output of:

dmsetup info -c

captured before running the command.
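A quick way to check for such a leftover name before re-running vgcreate is to scan for any device ending in "-lvmlock" (the name LVM composes for the lock LV is "<vgname>-lvmlock"). This is only a sketch: the sample rows below are hypothetical, with a stale entry injected for illustration; live, pipe `dmsetup info -c --noheadings` into the filter instead.

```shell
# Hypothetical 'dmsetup info -c' rows, one of them a stale lock LV left
# behind by an earlier failed vgcreate run.
sample='mpathb              252   8 L--w 0 1 0 mpath-36001405e880411833f04320aff07ec92
global_lock-lvmlock 252   9 L--w 0 1 0 LVM-9ivzxuQSybM0Lk7Nd0BkBIeWgwavYrYfzUyIigEBzCiTXfiJkHuO9SaOOO0i5E6y'
# Any hit here would collide with the LV name vgcreate is about to
# activate; it could be removed with 'dmsetup remove <name>' before retrying.
stale=$(printf '%s\n' "$sample" | awk '$1 ~ /-lvmlock$/ {print $1}')
echo "$stale"
```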

ncist2011 commented 2 years ago

I compiled it myself:

lvm version
  LVM version:     2.02.181(2) (2018-08-01)
  Library version: 1.02.150 (2018-08-01)
  Driver version:  4.39.0
  Configuration:   ./configure --enable-lvmlockd-sanlock

However, the system's built-in lvm version has the same problem:

[root@kunpeng03 ~]# dmsetup info -c
Name                        Maj Min Stat Open Targ Event  UUID
kunpeng03_SSD-kunpeng03_SSD 252   1 L--w    1    1      0 LVM-eO1T7X3l1vmeu7jEPBBPUp3RVZrdkSgMNxjthGnusCND4lVze1q1Lib2RHdVHd72
data-log                    252   6 L--w    1    1      0 LVM-fOPUbytUCXKlt63bN5mR3JYKZFNjHwzmtGdLXQRwZFstd0XQD1UJEAOWXDKSGLjX
mpathb                      252   8 L--w    0    1      0 mpath-36001405e880411833f04320aff07ec92
data-arstore                252   5 L--w    0    1      0 LVM-fOPUbytUCXKlt63bN5mR3JYKZFNjHwzm3Swp3BxJo6jv7TEO27BEbMPit2l0E3nD
mpatha                      252   7 L--w    0    1      0 mpath-36001405ff5deb38e75945449db2ba3e0
kunpeng03_HDD-kunpeng03_HDD 252   0 L--w    1    1      0 LVM-lv0mEAITn0dBQTyyW11cpk7Xy4XHgSchx2aHUFckjnlMdEbTcJ5UD08OUH9bxj5s
data-backup                 252   4 L--w    1    1      0 LVM-fOPUbytUCXKlt63bN5mR3JYKZFNjHwzmeDsvYWmw4OJ0Sf1b86TsBRpGxAEXQptT
data-datastore2             252   3 L--w    1    1      0 LVM-fOPUbytUCXKlt63bN5mR3JYKZFNjHwzmBWxZnm1xsixA3lY43BYsVoxtuDolXIyl
data-datastore1             252   2 L--w    1    1      0 LVM-fOPUbytUCXKlt63bN5mR3JYKZFNjHwzmy2gGDvUgVdLiW3wlh0S1x7Bz5eSkdhTG
[root@kunpeng03 ~]#
[root@kunpeng03 ~]# vgs
  Skipping global lock: lockspace not found or started
  VG            #PV #LV #SN Attr   VSize    VFree
  data            1   5   0 wz--n- <133.57g    0
  kunpeng03_HDD   1   1   0 wz--n-   <3.64t    0
  kunpeng03_SSD   1   1   0 wz--n- <447.13g    0
[root@kunpeng03 ~]#
[root@kunpeng03 ~]# cd /home/
[root@kunpeng03 home]# ./lvm version
  LVM version:     2.02.181(2) (2018-08-01)
  Library version: 1.02.150 (2018-08-01)
  Driver version:  4.39.0
  Configuration:   ./configure --build=aarch64-koji-linux-gnu --host=aarch64-koji-linux-gnu --program-prefix= --disable-dependency-tracking --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --with-default-dm-run-dir=/run --with-default-run-dir=/run/lvm --with-default-pid-dir=/run --with-default-locking-dir=/run/lock/lvm --with-usrlibdir=/usr/lib64 --enable-fsadm --enable-write_install --with-user= --with-group= --with-device-uid=0 --with-device-gid=6 --with-device-mode=0660 --enable-pkgconfig --enable-applib --enable-cmdlib --enable-dmeventd --enable-blkid_wiping --enable-python3-bindings --with-cluster=internal --with-clvmd=none --with-udevdir=/usr/lib/udev/rules.d --enable-udev_sync --with-thin=internal --enable-lvmetad --with-thin=internal --enable-lvmpolld --enable-lvmlockd-sanlock --enable-dbus-service --enable-notify-dbus --enable-dmfilemapd

 [root@kunpeng03 home]# ./lvm vgcreate global /dev/mapper/mpatha --shared --metadatasize 10M
  Enabling sanlock global lock
  Physical volume "/dev/mapper/mpatha" successfully created.
  device-mapper: reload ioctl on  (252:9) failed: Device or Resource busy
  Failed to activate new LV.
  Failed to create sanlock lv lvmlock in vg global
  Failed to create internal lv.
  Failed to initialize lock args for lock type sanlock
  Volume group "global" successfully removed
teigland commented 2 years ago

Please compile lvm from the main branch at https://sourceware.org/git/?p=lvm2.git;a=summary and see if that works.

zkabelac commented 2 years ago

If this is still applicable for upstream, please feel free to reopen this issue, but closing for now.