siderolabs / talos

Talos Linux is a modern Linux distribution built for Kubernetes.
https://www.talos.dev
Mozilla Public License 2.0

Add dependencies for encrypted Ceph OSDs #3129

Closed. DWSR closed this issue 1 year ago.

DWSR commented 3 years ago

Feature Request

Description

Right now, Ceph/Rook run correctly on top of Talos clusters, but only if the OSDs aren't encrypted. Talos should provide support for encrypted OSDs.

https://docs.ceph.com/en/latest/ceph-volume/lvm/encryption/
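For context, this is normally requested purely from the Rook side via the CephCluster storage settings; a minimal sketch (field names taken from the Rook docs, device name illustrative):

```yaml
# Sketch: ask Rook for an encrypted OSD on one device. ceph-volume then
# wraps the OSD's LVM logical volume in dm-crypt/LUKS, which is why the
# host needs working LVM and device-mapper support.
storage:
  useAllNodes: true
  useAllDevices: false
  devices:
    - name: nvme0n1
      config:
        encryptedDevice: "true"  # Rook device config values are strings
```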

cehoffman commented 1 year ago

I'd like to add that the requirements for encrypted drives also overlap with the requirements to run multiple OSDs per device, which is needed to get the expected performance from NVMe clusters.
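The multiple-OSDs-per-device case is driven by the same mechanism; a minimal sketch of the device list (field names from the Rook docs, device name illustrative):

```yaml
# Sketch: carve one NVMe device into several OSDs. ceph-volume implements
# this by creating multiple LVM logical volumes on the device, so it hits
# the same host-side LVM requirements as encryption does.
storage:
  devices:
    - name: nvme1n1
      config:
        osdsPerDevice: "4"  # Rook device config values are strings
```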

smira commented 1 year ago

What exactly is missing?

cehoffman commented 1 year ago

The LVM2 tools in the base Talos image. The OSD prepare step will try to nsenter to the host and execute LVM commands like vgcreate.

https://rook.io/docs/rook/v1.11/Getting-Started/Prerequisites/prerequisites/#lvm-package
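One quick way to see whether the host side is the problem is to check, from a privileged pod with host namespaces, whether the LVM commands the prepare job shells out to actually resolve. The command list below is an assumption based on the Rook prerequisites page, not an exhaustive list of what Rook invokes:

```shell
# Check that the LVM userspace tools the OSD prepare job needs exist on
# the host. "MISSING" entries point at the gap discussed in this issue.
for cmd in pvcreate vgcreate vgchange lvcreate; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```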

frezbo commented 1 year ago

> The LVM2 tools in the base Talos image. The OSD prepare step will try to nsenter to the host and execute LVM commands like vgcreate.
>
> https://rook.io/docs/rook/v1.11/Getting-Started/Prerequisites/prerequisites/#lvm-package

vgcreate should already be there I guess, an example here: https://github.com/siderolabs/talos/blob/main/internal/app/machined/pkg/runtime/v1alpha1/v1alpha1_sequencer_tasks.go#L2199


From the docs, we already run vgchange.

cehoffman commented 1 year ago

I didn't dig too deep into the error beyond seeing in the logs that it couldn't find some LVM commands. On searching, it seemed to cross over with this issue as well: https://github.com/rook/rook/issues/12012

Perhaps it is a bug within Rook.

smira commented 1 year ago

It would be nice to provide a short reproducer, e.g. a helm install that triggers the problem. That would help get the issue resolved.

cehoffman commented 1 year ago

It took a bit to get the system back to a state where I could retry multiple OSDs per device, but I got there and encountered the error again. These are the logs from the OSD prepare job, where I'm trying to set only the nvme1n1 device to have 4 OSDs.

OSD Prepare Logs

```
2023-06-26 14:25:45.119991 I | cephcmd: desired devices to configure osds: [{Name:/dev/nvme0n1 OSDsPerDevice:1 MetadataDevice: DatabaseSizeMB:0 DeviceClass:nvme InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/nvme1n1 OSDsPerDevice:4 MetadataDevice: DatabaseSizeMB:0 DeviceClass:nvme InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/1-1 OSDsPerDevice:1 MetadataDevice:/dev/2-1 DatabaseSizeMB:0 DeviceClass:hdd InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/1-2 OSDsPerDevice:1 MetadataDevice:/dev/2-1 DatabaseSizeMB:0 DeviceClass:hdd InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/1-3 OSDsPerDevice:1 MetadataDevice:/dev/2-1 DatabaseSizeMB:0 DeviceClass:hdd InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/1-4 OSDsPerDevice:1 MetadataDevice:/dev/2-1 DatabaseSizeMB:0 DeviceClass:hdd InitialWeight: IsFilter:false IsDevicePathFilter:false}]
2023-06-26 14:25:45.120256 I | rookcmd: starting Rook v1.11.8 with arguments '/rook/rook ceph osd provision'
2023-06-26 14:25:45.120262 I | rookcmd: flag values: --cluster-id=0510c673-2b58-4095-8bf5-036542fc7ffd, --cluster-name=rook-ceph, --data-device-filter=, --data-device-path-filter=, --data-devices=[{"id":"/dev/nvme0n1","storeConfig":{"osdsPerDevice":1,"deviceClass":"nvme"}},{"id":"/dev/nvme1n1","storeConfig":{"osdsPerDevice":4,"deviceClass":"nvme"}},{"id":"/dev/1-1","storeConfig":{"osdsPerDevice":1,"metadataDevice":"/dev/2-1","deviceClass":"hdd"}},{"id":"/dev/1-2","storeConfig":{"osdsPerDevice":1,"metadataDevice":"/dev/2-1","deviceClass":"hdd"}},{"id":"/dev/1-3","storeConfig":{"osdsPerDevice":1,"metadataDevice":"/dev/2-1","deviceClass":"hdd"}},{"id":"/dev/1-4","storeConfig":{"osdsPerDevice":1,"metadataDevice":"/dev/2-1","deviceClass":"hdd"}}], --encrypted-device=false, --force-format=false, --help=false, --location=, --log-level=DEBUG, --metadata-device=, --node-name=nxl03, --osd-crush-device-class=,
--osd-crush-initial-weight=, --osd-database-size=0, --osd-wal-size=576, --osds-per-device=1, --pvc-backed-osd=false 2023-06-26 14:25:45.120265 I | op-mon: parsing mon endpoints: a=[fd00:beef::5487]:6789,b=[fd00:beef::d482]:6789,c=[fd00:beef::b485]:6789 2023-06-26 14:25:45.126431 I | op-osd: CRUSH location=root=default host=nxl03 2023-06-26 14:25:45.126443 I | cephcmd: crush location of osd: root=default host=nxl03 2023-06-26 14:25:45.126453 D | exec: Running command: dmsetup version 2023-06-26 14:25:45.129040 I | cephosd: Library version: 1.02.181-RHEL8 (2021-10-20) Driver version: 4.47.0 2023-06-26 14:25:45.153613 D | cephclient: No ceph configuration override to merge as "rook-config-override" configmap is empty 2023-06-26 14:25:45.153651 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config 2023-06-26 14:25:45.153966 I | cephclient: generated admin config in /var/lib/rook/rook-ceph 2023-06-26 14:25:45.154181 D | cephclient: config file @ /etc/ceph/ceph.conf: [global] fsid = bda8d74c-6c30-4110-80d9-ab4276750be4 mon initial members = c a b mon host = [v2:[fd00:beef::b485]:3300,v1:[fd00:beef::b485]:6789],[v2:[fd00:beef::5487]:3300,v1:[fd00:beef::5487]:6789],[v2:[fd00:beef::d482]:3300,v1:[fd00:beef::d482]:6789] [client.admin] keyring = /var/lib/rook/rook-ceph/client.admin.keyring 2023-06-26 14:25:45.154198 I | cephosd: discovering hardware 2023-06-26 14:25:45.154211 D | exec: Running command: lsblk --all --noheadings --list --output KNAME 2023-06-26 14:25:45.160333 D | exec: Running command: lsblk /dev/loop0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.166877 D | sys: lsblk output: "SIZE=\"51548160\" ROTA=\"0\" RO=\"1\" TYPE=\"loop\" PKNAME=\"\" NAME=\"/dev/loop0\" KNAME=\"/dev/loop0\" MOUNTPOINT=\"/rootfs\" FSTYPE=\"squashfs\"" 2023-06-26 14:25:45.166913 W | inventory: skipping device "loop0". 
unsupported diskType loop 2023-06-26 14:25:45.166928 D | exec: Running command: lsblk /dev/loop1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.170469 E | sys: failed to execute lsblk. output: . 2023-06-26 14:25:45.170515 W | inventory: skipping device "loop1". exit status 32 2023-06-26 14:25:45.170528 D | exec: Running command: lsblk /dev/loop2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.174181 E | sys: failed to execute lsblk. output: . 2023-06-26 14:25:45.174209 W | inventory: skipping device "loop2". exit status 32 2023-06-26 14:25:45.174225 D | exec: Running command: lsblk /dev/loop3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.177783 E | sys: failed to execute lsblk. output: . 2023-06-26 14:25:45.177797 W | inventory: skipping device "loop3". exit status 32 2023-06-26 14:25:45.177805 D | exec: Running command: lsblk /dev/loop4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.181308 E | sys: failed to execute lsblk. output: . 2023-06-26 14:25:45.181319 W | inventory: skipping device "loop4". exit status 32 2023-06-26 14:25:45.181327 D | exec: Running command: lsblk /dev/loop5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.184732 E | sys: failed to execute lsblk. output: . 2023-06-26 14:25:45.184740 W | inventory: skipping device "loop5". exit status 32 2023-06-26 14:25:45.184746 D | exec: Running command: lsblk /dev/loop6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.187947 E | sys: failed to execute lsblk. output: . 2023-06-26 14:25:45.187958 W | inventory: skipping device "loop6". 
exit status 32 2023-06-26 14:25:45.187967 D | exec: Running command: lsblk /dev/loop7 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.191437 E | sys: failed to execute lsblk. output: . 2023-06-26 14:25:45.191447 W | inventory: skipping device "loop7". exit status 32 2023-06-26 14:25:45.191454 D | exec: Running command: lsblk /dev/sda --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.198512 D | sys: lsblk output: "SIZE=\"256641603584\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sda\" KNAME=\"/dev/sda\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2023-06-26 14:25:45.198590 D | exec: Running command: sgdisk --print /dev/sda 2023-06-26 14:25:45.209563 D | exec: Running command: udevadm info --query=property /dev/sda 2023-06-26 14:25:45.219346 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0 /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0\nDEVNAME=/dev/sda\nDEVPATH=/devices/pci0000:00/0000:00:14.0/usb2/2-8/2-8:1.0/host5/target5:0:0/5:0:0:0/block/sda\nDEVTYPE=disk\nDISKSEQ=11\nID_BUS=usb\nID_INSTANCE=0:0\nID_MODEL=Flash_Drive_FIT\nID_MODEL_ENC=Flash\\x20Drive\\x20FIT\\x20\nID_MODEL_ID=1000\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=5e6cebac-824d-4af1-88fb-c16d58f4d913\nID_PATH=pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0\nID_PATH_TAG=pci-0000_00_14_0-usb-0_8_1_0-scsi-0_0_0_0\nID_REVISION=1100\nID_SERIAL=Samsung_Flash_Drive_FIT_0346222100000240-0:0\nID_SERIAL_SHORT=0346222100000240\nID_TYPE=disk\nID_USB_DRIVER=usb-storage\nID_USB_INTERFACES=:080650:\nID_USB_INTERFACE_NUM=00\nID_VENDOR=Samsung\nID_VENDOR_ENC=Samsung\\x20\nID_VENDOR_ID=090c\nMAJOR=8\nMINOR=0\nSUBSYSTEM=block\nUSEC_INITIALIZED=6565242" 2023-06-26 14:25:45.219401 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/sda 2023-06-26 14:25:45.222500 I | inventory: skipping device "sda" 
because it has child, considering the child instead. 2023-06-26 14:25:45.222566 D | exec: Running command: lsblk /dev/sda1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.226740 D | sys: lsblk output: "SIZE=\"104857600\" ROTA=\"1\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/sda\" NAME=\"/dev/sda1\" KNAME=\"/dev/sda1\" MOUNTPOINT=\"\" FSTYPE=\"vfat\"" 2023-06-26 14:25:45.226771 D | exec: Running command: udevadm info --query=property /dev/sda1 2023-06-26 14:25:45.232830 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part1 /dev/disk/by-partuuid/c9c7ec40-6550-4389-9592-664116f5a27a /dev/disk/by-uuid/6494-1D42 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part1 /dev/disk/by-label/EFI /dev/disk/by-partlabel/EFI\nDEVNAME=/dev/sda1\nDEVPATH=/devices/pci0000:00/0000:00:14.0/usb2/2-8/2-8:1.0/host5/target5:0:0/5:0:0:0/block/sda/sda1\nDEVTYPE=partition\nDISKSEQ=11\nID_BUS=usb\nID_FS_LABEL=EFI\nID_FS_LABEL_ENC=EFI\nID_FS_TYPE=vfat\nID_FS_USAGE=filesystem\nID_FS_UUID=6494-1D42\nID_FS_UUID_ENC=6494-1D42\nID_FS_VERSION=FAT32\nID_INSTANCE=0:0\nID_MODEL=Flash_Drive_FIT\nID_MODEL_ENC=Flash\\x20Drive\\x20FIT\\x20\nID_MODEL_ID=1000\nID_PART_ENTRY_DISK=8:0\nID_PART_ENTRY_NAME=EFI\nID_PART_ENTRY_NUMBER=1\nID_PART_ENTRY_OFFSET=2048\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_SIZE=204800\nID_PART_ENTRY_TYPE=c12a7328-f81f-11d2-ba4b-00a0c93ec93b\nID_PART_ENTRY_UUID=c9c7ec40-6550-4389-9592-664116f5a27a\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=5e6cebac-824d-4af1-88fb-c16d58f4d913\nID_PATH=pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0\nID_PATH_TAG=pci-0000_00_14_0-usb-0_8_1_0-scsi-0_0_0_0\nID_REVISION=1100\nID_SERIAL=Samsung_Flash_Drive_FIT_0346222100000240-0:0\nID_SERIAL_SHORT=0346222100000240\nID_TYPE=disk\nID_USB_DRIVER=usb-storage\nID_USB_INTERFACES=:080650:\nID_USB_INTERFACE_NUM=00\nID_VENDOR=Samsung\nID_VENDOR_ENC=Samsung\\x20\nID_VENDOR_ID=090c\nMAJOR=8\nMINOR
=1\nPARTN=1\nPARTNAME=EFI\nSUBSYSTEM=block\nUSEC_INITIALIZED=6632314" 2023-06-26 14:25:45.232869 D | exec: Running command: lsblk /dev/sda2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.236005 D | sys: lsblk output: "SIZE=\"1048576\" ROTA=\"1\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/sda\" NAME=\"/dev/sda2\" KNAME=\"/dev/sda2\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2023-06-26 14:25:45.236027 D | exec: Running command: udevadm info --query=property /dev/sda2 2023-06-26 14:25:45.242208 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-partuuid/af96884d-7668-46a0-83d7-6add9267db4a /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part2 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part2 /dev/disk/by-partlabel/BIOS\nDEVNAME=/dev/sda2\nDEVPATH=/devices/pci0000:00/0000:00:14.0/usb2/2-8/2-8:1.0/host5/target5:0:0/5:0:0:0/block/sda/sda2\nDEVTYPE=partition\nDISKSEQ=11\nID_BUS=usb\nID_INSTANCE=0:0\nID_MODEL=Flash_Drive_FIT\nID_MODEL_ENC=Flash\\x20Drive\\x20FIT\\x20\nID_MODEL_ID=1000\nID_PART_ENTRY_DISK=8:0\nID_PART_ENTRY_FLAGS=0x4\nID_PART_ENTRY_NAME=BIOS\nID_PART_ENTRY_NUMBER=2\nID_PART_ENTRY_OFFSET=206848\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_SIZE=2048\nID_PART_ENTRY_TYPE=21686148-6449-6e6f-744e-656564454649\nID_PART_ENTRY_UUID=af96884d-7668-46a0-83d7-6add9267db4a\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=5e6cebac-824d-4af1-88fb-c16d58f4d913\nID_PATH=pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0\nID_PATH_TAG=pci-0000_00_14_0-usb-0_8_1_0-scsi-0_0_0_0\nID_REVISION=1100\nID_SERIAL=Samsung_Flash_Drive_FIT_0346222100000240-0:0\nID_SERIAL_SHORT=0346222100000240\nID_TYPE=disk\nID_USB_DRIVER=usb-storage\nID_USB_INTERFACES=:080650:\nID_USB_INTERFACE_NUM=00\nID_VENDOR=Samsung\nID_VENDOR_ENC=Samsung\\x20\nID_VENDOR_ID=090c\nMAJOR=8\nMINOR=2\nPARTN=2\nPARTNAME=BIOS\nSUBSYSTEM=block\nUSEC_INITIALIZED=6580869" 2023-06-26 14:25:45.242245 D | exec: Running command: lsblk /dev/sda3 --bytes --nodeps 
--pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.248235 D | sys: lsblk output: "SIZE=\"1048576000\" ROTA=\"1\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/sda\" NAME=\"/dev/sda3\" KNAME=\"/dev/sda3\" MOUNTPOINT=\"\" FSTYPE=\"xfs\"" 2023-06-26 14:25:45.248258 D | exec: Running command: udevadm info --query=property /dev/sda3 2023-06-26 14:25:45.254555 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-partuuid/bc37d3a4-5029-45b0-9af1-719c8ac5e1c7 /dev/disk/by-partlabel/BOOT /dev/disk/by-uuid/d5a06eb3-df9f-449b-942f-5990f9a5a7a4 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part3 /dev/disk/by-label/BOOT /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part3\nDEVNAME=/dev/sda3\nDEVPATH=/devices/pci0000:00/0000:00:14.0/usb2/2-8/2-8:1.0/host5/target5:0:0/5:0:0:0/block/sda/sda3\nDEVTYPE=partition\nDISKSEQ=11\nID_BUS=usb\nID_FS_LABEL=BOOT\nID_FS_LABEL_ENC=BOOT\nID_FS_TYPE=xfs\nID_FS_USAGE=filesystem\nID_FS_UUID=d5a06eb3-df9f-449b-942f-5990f9a5a7a4\nID_FS_UUID_ENC=d5a06eb3-df9f-449b-942f-5990f9a5a7a4\nID_INSTANCE=0:0\nID_MODEL=Flash_Drive_FIT\nID_MODEL_ENC=Flash\\x20Drive\\x20FIT\\x20\nID_MODEL_ID=1000\nID_PART_ENTRY_DISK=8:0\nID_PART_ENTRY_NAME=BOOT\nID_PART_ENTRY_NUMBER=3\nID_PART_ENTRY_OFFSET=208896\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_SIZE=2048000\nID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4\nID_PART_ENTRY_UUID=bc37d3a4-5029-45b0-9af1-719c8ac5e1c7\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=5e6cebac-824d-4af1-88fb-c16d58f4d913\nID_PATH=pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0\nID_PATH_TAG=pci-0000_00_14_0-usb-0_8_1_0-scsi-0_0_0_0\nID_REVISION=1100\nID_SERIAL=Samsung_Flash_Drive_FIT_0346222100000240-0:0\nID_SERIAL_SHORT=0346222100000240\nID_TYPE=disk\nID_USB_DRIVER=usb-storage\nID_USB_INTERFACES=:080650:\nID_USB_INTERFACE_NUM=00\nID_VENDOR=Samsung\nID_VENDOR_ENC=Samsung\\x20\nID_VENDOR_ID=090c\nMAJOR=8\nMINOR=3\nPARTN=3\nPARTNAME=BOOT\nSUBSYSTEM=block\nUSEC_INITIALIZED=6610396" 
2023-06-26 14:25:45.254578 D | exec: Running command: lsblk /dev/sda4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.259053 D | sys: lsblk output: "SIZE=\"1048576\" ROTA=\"1\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/sda\" NAME=\"/dev/sda4\" KNAME=\"/dev/sda4\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2023-06-26 14:25:45.259084 D | exec: Running command: udevadm info --query=property /dev/sda4 2023-06-26 14:25:45.267716 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-partlabel/META /dev/disk/by-partuuid/d16d8c5e-e552-4a79-9adc-e69b64299881 /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part4 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part4\nDEVNAME=/dev/sda4\nDEVPATH=/devices/pci0000:00/0000:00:14.0/usb2/2-8/2-8:1.0/host5/target5:0:0/5:0:0:0/block/sda/sda4\nDEVTYPE=partition\nDISKSEQ=11\nID_BUS=usb\nID_INSTANCE=0:0\nID_MODEL=Flash_Drive_FIT\nID_MODEL_ENC=Flash\\x20Drive\\x20FIT\\x20\nID_MODEL_ID=1000\nID_PART_ENTRY_DISK=8:0\nID_PART_ENTRY_NAME=META\nID_PART_ENTRY_NUMBER=4\nID_PART_ENTRY_OFFSET=2256896\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_SIZE=2048\nID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4\nID_PART_ENTRY_UUID=d16d8c5e-e552-4a79-9adc-e69b64299881\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=5e6cebac-824d-4af1-88fb-c16d58f4d913\nID_PATH=pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0\nID_PATH_TAG=pci-0000_00_14_0-usb-0_8_1_0-scsi-0_0_0_0\nID_REVISION=1100\nID_SERIAL=Samsung_Flash_Drive_FIT_0346222100000240-0:0\nID_SERIAL_SHORT=0346222100000240\nID_TYPE=disk\nID_USB_DRIVER=usb-storage\nID_USB_INTERFACES=:080650:\nID_USB_INTERFACE_NUM=00\nID_VENDOR=Samsung\nID_VENDOR_ENC=Samsung\\x20\nID_VENDOR_ID=090c\nMAJOR=8\nMINOR=4\nPARTN=4\nPARTNAME=META\nSUBSYSTEM=block\nUSEC_INITIALIZED=6585690" 2023-06-26 14:25:45.267767 D | exec: Running command: lsblk /dev/sda5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 
14:25:45.274181 D | sys: lsblk output: "SIZE=\"104857600\" ROTA=\"1\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/sda\" NAME=\"/dev/sda5\" KNAME=\"/dev/sda5\" MOUNTPOINT=\"/rootfs/system/state\" FSTYPE=\"xfs\"" 2023-06-26 14:25:45.274207 D | exec: Running command: udevadm info --query=property /dev/sda5 2023-06-26 14:25:45.283072 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-label/STATE /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part5 /dev/disk/by-partlabel/STATE /dev/disk/by-uuid/dfed63cf-c7c6-4a13-b193-c16f13fb73df /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part5 /dev/disk/by-partuuid/37e1e641-de57-4a0a-8240-268bda976aab\nDEVNAME=/dev/sda5\nDEVPATH=/devices/pci0000:00/0000:00:14.0/usb2/2-8/2-8:1.0/host5/target5:0:0/5:0:0:0/block/sda/sda5\nDEVTYPE=partition\nDISKSEQ=11\nID_BUS=usb\nID_FS_LABEL=STATE\nID_FS_LABEL_ENC=STATE\nID_FS_TYPE=xfs\nID_FS_USAGE=filesystem\nID_FS_UUID=dfed63cf-c7c6-4a13-b193-c16f13fb73df\nID_FS_UUID_ENC=dfed63cf-c7c6-4a13-b193-c16f13fb73df\nID_INSTANCE=0:0\nID_MODEL=Flash_Drive_FIT\nID_MODEL_ENC=Flash\\x20Drive\\x20FIT\\x20\nID_MODEL_ID=1000\nID_PART_ENTRY_DISK=8:0\nID_PART_ENTRY_NAME=STATE\nID_PART_ENTRY_NUMBER=5\nID_PART_ENTRY_OFFSET=2258944\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_SIZE=204800\nID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4\nID_PART_ENTRY_UUID=37e1e641-de57-4a0a-8240-268bda976aab\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=5e6cebac-824d-4af1-88fb-c16d58f4d913\nID_PATH=pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0\nID_PATH_TAG=pci-0000_00_14_0-usb-0_8_1_0-scsi-0_0_0_0\nID_REVISION=1100\nID_SERIAL=Samsung_Flash_Drive_FIT_0346222100000240-0:0\nID_SERIAL_SHORT=0346222100000240\nID_TYPE=disk\nID_USB_DRIVER=usb-storage\nID_USB_INTERFACES=:080650:\nID_USB_INTERFACE_NUM=00\nID_VENDOR=Samsung\nID_VENDOR_ENC=Samsung\\x20\nID_VENDOR_ID=090c\nMAJOR=8\nMINOR=5\nPARTN=5\nPARTNAME=STATE\nSUBSYSTEM=block\nUSEC_INITIALIZED=6639058" 2023-06-26 14:25:45.283108 D | exec: Running command: lsblk 
/dev/sda6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.288920 D | sys: lsblk output: "SIZE=\"255379636224\" ROTA=\"1\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/sda\" NAME=\"/dev/sda6\" KNAME=\"/dev/sda6\" MOUNTPOINT=\"/rootfs/var\" FSTYPE=\"xfs\"" 2023-06-26 14:25:45.288944 D | exec: Running command: udevadm info --query=property /dev/sda6 2023-06-26 14:25:45.297391 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-partlabel/EPHEMERAL /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part6 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part6 /dev/disk/by-partuuid/b18b93ad-b4a8-4a18-9fbd-131e1b7c5f8f /dev/disk/by-label/EPHEMERAL /dev/disk/by-uuid/81cd20b9-1b29-4a73-b058-826052b71a00\nDEVNAME=/dev/sda6\nDEVPATH=/devices/pci0000:00/0000:00:14.0/usb2/2-8/2-8:1.0/host5/target5:0:0/5:0:0:0/block/sda/sda6\nDEVTYPE=partition\nDISKSEQ=11\nID_BUS=usb\nID_FS_LABEL=EPHEMERAL\nID_FS_LABEL_ENC=EPHEMERAL\nID_FS_TYPE=xfs\nID_FS_USAGE=filesystem\nID_FS_UUID=81cd20b9-1b29-4a73-b058-826052b71a00\nID_FS_UUID_ENC=81cd20b9-1b29-4a73-b058-826052b71a00\nID_INSTANCE=0:0\nID_MODEL=Flash_Drive_FIT\nID_MODEL_ENC=Flash\\x20Drive\\x20FIT\\x20\nID_MODEL_ID=1000\nID_PART_ENTRY_DISK=8:0\nID_PART_ENTRY_NAME=EPHEMERAL\nID_PART_ENTRY_NUMBER=6\nID_PART_ENTRY_OFFSET=2463744\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_SIZE=498788352\nID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4\nID_PART_ENTRY_UUID=b18b93ad-b4a8-4a18-9fbd-131e1b7c5f8f\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=5e6cebac-824d-4af1-88fb-c16d58f4d913\nID_PATH=pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0\nID_PATH_TAG=pci-0000_00_14_0-usb-0_8_1_0-scsi-0_0_0_0\nID_REVISION=1100\nID_SERIAL=Samsung_Flash_Drive_FIT_0346222100000240-0:0\nID_SERIAL_SHORT=0346222100000240\nID_TYPE=disk\nID_USB_DRIVER=usb-storage\nID_USB_INTERFACES=:080650:\nID_USB_INTERFACE_NUM=00\nID_VENDOR=Samsung\nID_VENDOR_ENC=Samsung\\x20\nID_VENDOR_ID=090c\nMAJOR=8\nMINOR=6\nP
ARTN=6\nPARTNAME=EPHEMERAL\nSUBSYSTEM=block\nUSEC_INITIALIZED=6639326" 2023-06-26 14:25:45.297446 D | exec: Running command: lsblk /dev/nvme0n1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.303291 D | sys: lsblk output: "SIZE=\"1024209543168\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nvme0n1\" KNAME=\"/dev/nvme0n1\" MOUNTPOINT=\"\" FSTYPE=\"ceph_bluestore\"" 2023-06-26 14:25:45.303350 D | exec: Running command: sgdisk --print /dev/nvme0n1 2023-06-26 14:25:45.309360 D | exec: Running command: udevadm info --query=property /dev/nvme0n1 2023-06-26 14:25:45.318191 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-id/nvme-H20_HBRPEKNL0203A_NVMe_INTEL_1024GB_PHPG1401005Y1P0B-1 /dev/ceph-disks/2-1 /dev/disk/by-id/nvme-eui.5cd2e4732150000e\nDEVNAME=/dev/nvme0n1\nDEVPATH=/devices/pci0000:00/0000:00:1a.0/0000:6d:00.0/nvme/nvme0/nvme0n1\nDEVTYPE=disk\nDISKSEQ=9\nID_FS_TYPE=ceph_bluestore\nID_FS_USAGE=other\nID_MODEL=H20 HBRPEKNL0203A NVMe INTEL 1024GB\nID_SERIAL=H20 HBRPEKNL0203A NVMe INTEL 1024GB_PHPG1401005Y1P0B-1\nID_SERIAL_SHORT=PHPG1401005Y1P0B-1\nID_WWN=eui.5cd2e4732150000e\nMAJOR=259\nMINOR=0\nSUBSYSTEM=block\nUSEC_INITIALIZED=6551824" 2023-06-26 14:25:45.318230 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nvme0n1 2023-06-26 14:25:45.321329 D | exec: Running command: lsblk /dev/nvme1n1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2023-06-26 14:25:45.327244 D | sys: lsblk output: "SIZE=\"1024209543168\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nvme1n1\" KNAME=\"/dev/nvme1n1\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2023-06-26 14:25:45.327296 D | exec: Running command: sgdisk --print /dev/nvme1n1 2023-06-26 14:25:45.332719 D | exec: Running command: udevadm info --query=property /dev/nvme1n1 2023-06-26 14:25:45.339741 D | sys: udevadm info output: "DEVLINKS=/dev/ceph-disks/2-2 
/dev/disk/by-id/nvme-H20_HBRPEKNL0203A_NVMe_INTEL_1024GB_PHPG1246003M1P0B-1 /dev/disk/by-id/nvme-eui.5cd2e47811500761\nDEVNAME=/dev/nvme1n1\nDEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:6c:00.0/nvme/nvme1/nvme1n1\nDEVTYPE=disk\nDISKSEQ=10\nID_MODEL=H20 HBRPEKNL0203A NVMe INTEL 1024GB\nID_SERIAL=H20 HBRPEKNL0203A NVMe INTEL 1024GB_PHPG1246003M1P0B-1\nID_SERIAL_SHORT=PHPG1246003M1P0B-1\nID_WWN=eui.5cd2e47811500761\nMAJOR=259\nMINOR=1\nSUBSYSTEM=block\nUSEC_INITIALIZED=6551803" 2023-06-26 14:25:45.339765 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nvme1n1 2023-06-26 14:25:45.341758 D | inventory: discovered disks are: 2023-06-26 14:25:45.341813 D | inventory: &{Name:sda1 Parent:sda HasChildren:false DevLinks:/dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part1 /dev/disk/by-partuuid/c9c7ec40-6550-4389-9592-664116f5a27a /dev/disk/by-uuid/6494-1D42 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part1 /dev/disk/by-label/EFI /dev/disk/by-partlabel/EFI Size:104857600 UUID: Serial:Samsung_Flash_Drive_FIT_0346222100000240-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem:vfat Mountpoint: Vendor:Samsung Model:Flash_Drive_FIT WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sda1 KernelName:sda1 Encrypted:false} 2023-06-26 14:25:45.341834 D | inventory: &{Name:sda2 Parent:sda HasChildren:false DevLinks:/dev/disk/by-partuuid/af96884d-7668-46a0-83d7-6add9267db4a /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part2 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part2 /dev/disk/by-partlabel/BIOS Size:1048576 UUID: Serial:Samsung_Flash_Drive_FIT_0346222100000240-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor:Samsung Model:Flash_Drive_FIT WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sda2 KernelName:sda2 Encrypted:false} 2023-06-26 14:25:45.341851 D | inventory: &{Name:sda3 Parent:sda 
HasChildren:false DevLinks:/dev/disk/by-partuuid/bc37d3a4-5029-45b0-9af1-719c8ac5e1c7 /dev/disk/by-partlabel/BOOT /dev/disk/by-uuid/d5a06eb3-df9f-449b-942f-5990f9a5a7a4 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part3 /dev/disk/by-label/BOOT /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part3 Size:1048576000 UUID: Serial:Samsung_Flash_Drive_FIT_0346222100000240-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem:xfs Mountpoint: Vendor:Samsung Model:Flash_Drive_FIT WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sda3 KernelName:sda3 Encrypted:false} 2023-06-26 14:25:45.341869 D | inventory: &{Name:sda4 Parent:sda HasChildren:false DevLinks:/dev/disk/by-partlabel/META /dev/disk/by-partuuid/d16d8c5e-e552-4a79-9adc-e69b64299881 /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part4 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part4 Size:1048576 UUID: Serial:Samsung_Flash_Drive_FIT_0346222100000240-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor:Samsung Model:Flash_Drive_FIT WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sda4 KernelName:sda4 Encrypted:false} 2023-06-26 14:25:45.341884 D | inventory: &{Name:sda5 Parent:sda HasChildren:false DevLinks:/dev/disk/by-label/STATE /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part5 /dev/disk/by-partlabel/STATE /dev/disk/by-uuid/dfed63cf-c7c6-4a13-b193-c16f13fb73df /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part5 /dev/disk/by-partuuid/37e1e641-de57-4a0a-8240-268bda976aab Size:104857600 UUID: Serial:Samsung_Flash_Drive_FIT_0346222100000240-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem:xfs Mountpoint:state Vendor:Samsung Model:Flash_Drive_FIT WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sda5 KernelName:sda5 Encrypted:false} 2023-06-26 14:25:45.341903 D | inventory: &{Name:sda6 Parent:sda 
HasChildren:false DevLinks:/dev/disk/by-partlabel/EPHEMERAL /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part6 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part6 /dev/disk/by-partuuid/b18b93ad-b4a8-4a18-9fbd-131e1b7c5f8f /dev/disk/by-label/EPHEMERAL /dev/disk/by-uuid/81cd20b9-1b29-4a73-b058-826052b71a00 Size:255379636224 UUID: Serial:Samsung_Flash_Drive_FIT_0346222100000240-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem:xfs Mountpoint:var Vendor:Samsung Model:Flash_Drive_FIT WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sda6 KernelName:sda6 Encrypted:false}
2023-06-26 14:25:45.341927 D | inventory: &{Name:nvme0n1 Parent: HasChildren:false DevLinks:/dev/disk/by-id/nvme-H20_HBRPEKNL0203A_NVMe_INTEL_1024GB_PHPG1401005Y1P0B-1 /dev/ceph-disks/2-1 /dev/disk/by-id/nvme-eui.5cd2e4732150000e Size:1024209543168 UUID:63aa06ab-61c4-47d5-9327-25ea5081c5ac Serial:H20 HBRPEKNL0203A NVMe INTEL 1024GB_PHPG1401005Y1P0B-1 Type:disk Rotational:false Readonly:false Partitions:[] Filesystem:ceph_bluestore Mountpoint: Vendor: Model:H20 HBRPEKNL0203A NVMe INTEL 1024GB WWN:eui.5cd2e4732150000e WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nvme0n1 KernelName:nvme0n1 Encrypted:false}
2023-06-26 14:25:45.341949 D | inventory: &{Name:nvme1n1 Parent: HasChildren:false DevLinks:/dev/ceph-disks/2-2 /dev/disk/by-id/nvme-H20_HBRPEKNL0203A_NVMe_INTEL_1024GB_PHPG1246003M1P0B-1 /dev/disk/by-id/nvme-eui.5cd2e47811500761 Size:1024209543168 UUID:83fd6be2-d251-4896-83d4-371ce3941781 Serial:H20 HBRPEKNL0203A NVMe INTEL 1024GB_PHPG1246003M1P0B-1 Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model:H20 HBRPEKNL0203A NVMe INTEL 1024GB WWN:eui.5cd2e47811500761 WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nvme1n1 KernelName:nvme1n1 Encrypted:false}
2023-06-26 14:25:45.341956 I | cephosd: creating and starting the osds
2023-06-26 14:25:45.341986 D | cephosd: desiredDevices are [{Name:/dev/nvme0n1 OSDsPerDevice:1 MetadataDevice: DatabaseSizeMB:0 DeviceClass:nvme InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/nvme1n1 OSDsPerDevice:4 MetadataDevice: DatabaseSizeMB:0 DeviceClass:nvme InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/1-1 OSDsPerDevice:1 MetadataDevice:/dev/2-1 DatabaseSizeMB:0 DeviceClass:hdd InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/1-2 OSDsPerDevice:1 MetadataDevice:/dev/2-1 DatabaseSizeMB:0 DeviceClass:hdd InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/1-3 OSDsPerDevice:1 MetadataDevice:/dev/2-1 DatabaseSizeMB:0 DeviceClass:hdd InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/1-4 OSDsPerDevice:1 MetadataDevice:/dev/2-1 DatabaseSizeMB:0 DeviceClass:hdd InitialWeight: IsFilter:false IsDevicePathFilter:false}]
2023-06-26 14:25:45.341996 D | cephosd: context.Devices are:
2023-06-26 14:25:45.342016 D | cephosd: &{Name:sda1 Parent:sda HasChildren:false DevLinks:/dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part1 /dev/disk/by-partuuid/c9c7ec40-6550-4389-9592-664116f5a27a /dev/disk/by-uuid/6494-1D42 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part1 /dev/disk/by-label/EFI /dev/disk/by-partlabel/EFI Size:104857600 UUID: Serial:Samsung_Flash_Drive_FIT_0346222100000240-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem:vfat Mountpoint: Vendor:Samsung Model:Flash_Drive_FIT WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sda1 KernelName:sda1 Encrypted:false}
2023-06-26 14:25:45.342031 D | cephosd: &{Name:sda2 Parent:sda HasChildren:false DevLinks:/dev/disk/by-partuuid/af96884d-7668-46a0-83d7-6add9267db4a /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part2 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part2 /dev/disk/by-partlabel/BIOS Size:1048576 UUID: Serial:Samsung_Flash_Drive_FIT_0346222100000240-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor:Samsung Model:Flash_Drive_FIT WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sda2 KernelName:sda2 Encrypted:false}
2023-06-26 14:25:45.342045 D | cephosd: &{Name:sda3 Parent:sda HasChildren:false DevLinks:/dev/disk/by-partuuid/bc37d3a4-5029-45b0-9af1-719c8ac5e1c7 /dev/disk/by-partlabel/BOOT /dev/disk/by-uuid/d5a06eb3-df9f-449b-942f-5990f9a5a7a4 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part3 /dev/disk/by-label/BOOT /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part3 Size:1048576000 UUID: Serial:Samsung_Flash_Drive_FIT_0346222100000240-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem:xfs Mountpoint: Vendor:Samsung Model:Flash_Drive_FIT WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sda3 KernelName:sda3 Encrypted:false}
2023-06-26 14:25:45.342059 D | cephosd: &{Name:sda4 Parent:sda HasChildren:false DevLinks:/dev/disk/by-partlabel/META /dev/disk/by-partuuid/d16d8c5e-e552-4a79-9adc-e69b64299881 /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part4 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part4 Size:1048576 UUID: Serial:Samsung_Flash_Drive_FIT_0346222100000240-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor:Samsung Model:Flash_Drive_FIT WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sda4 KernelName:sda4 Encrypted:false}
2023-06-26 14:25:45.342072 D | cephosd: &{Name:sda5 Parent:sda HasChildren:false DevLinks:/dev/disk/by-label/STATE /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part5 /dev/disk/by-partlabel/STATE /dev/disk/by-uuid/dfed63cf-c7c6-4a13-b193-c16f13fb73df /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part5 /dev/disk/by-partuuid/37e1e641-de57-4a0a-8240-268bda976aab Size:104857600 UUID: Serial:Samsung_Flash_Drive_FIT_0346222100000240-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem:xfs Mountpoint:state Vendor:Samsung Model:Flash_Drive_FIT WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sda5 KernelName:sda5 Encrypted:false}
2023-06-26 14:25:45.342098 D | cephosd: &{Name:sda6 Parent:sda HasChildren:false DevLinks:/dev/disk/by-partlabel/EPHEMERAL /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part6 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part6 /dev/disk/by-partuuid/b18b93ad-b4a8-4a18-9fbd-131e1b7c5f8f /dev/disk/by-label/EPHEMERAL /dev/disk/by-uuid/81cd20b9-1b29-4a73-b058-826052b71a00 Size:255379636224 UUID: Serial:Samsung_Flash_Drive_FIT_0346222100000240-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem:xfs Mountpoint:var Vendor:Samsung Model:Flash_Drive_FIT WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sda6 KernelName:sda6 Encrypted:false}
2023-06-26 14:25:45.342113 D | cephosd: &{Name:nvme0n1 Parent: HasChildren:false DevLinks:/dev/disk/by-id/nvme-H20_HBRPEKNL0203A_NVMe_INTEL_1024GB_PHPG1401005Y1P0B-1 /dev/ceph-disks/2-1 /dev/disk/by-id/nvme-eui.5cd2e4732150000e Size:1024209543168 UUID:63aa06ab-61c4-47d5-9327-25ea5081c5ac Serial:H20 HBRPEKNL0203A NVMe INTEL 1024GB_PHPG1401005Y1P0B-1 Type:disk Rotational:false Readonly:false Partitions:[] Filesystem:ceph_bluestore Mountpoint: Vendor: Model:H20 HBRPEKNL0203A NVMe INTEL 1024GB WWN:eui.5cd2e4732150000e WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nvme0n1 KernelName:nvme0n1 Encrypted:false}
2023-06-26 14:25:45.342137 D | cephosd: &{Name:nvme1n1 Parent: HasChildren:false DevLinks:/dev/ceph-disks/2-2 /dev/disk/by-id/nvme-H20_HBRPEKNL0203A_NVMe_INTEL_1024GB_PHPG1246003M1P0B-1 /dev/disk/by-id/nvme-eui.5cd2e47811500761 Size:1024209543168 UUID:83fd6be2-d251-4896-83d4-371ce3941781 Serial:H20 HBRPEKNL0203A NVMe INTEL 1024GB_PHPG1246003M1P0B-1 Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model:H20 HBRPEKNL0203A NVMe INTEL 1024GB WWN:eui.5cd2e47811500761 WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nvme1n1 KernelName:nvme1n1 Encrypted:false}
2023-06-26 14:25:45.342146 I | cephosd: skipping device "sda1" because it contains a filesystem "vfat"
2023-06-26 14:25:45.342152 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2023-06-26 14:25:45.342678 D | exec: Running command: udevadm info --query=property /dev/sda2
2023-06-26 14:25:45.350856 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part2 /dev/disk/by-partlabel/BIOS /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part2 /dev/disk/by-partuuid/af96884d-7668-46a0-83d7-6add9267db4a\nDEVNAME=/dev/sda2\nDEVPATH=/devices/pci0000:00/0000:00:14.0/usb2/2-8/2-8:1.0/host5/target5:0:0/5:0:0:0/block/sda/sda2\nDEVTYPE=partition\nDISKSEQ=11\nID_BUS=usb\nID_INSTANCE=0:0\nID_MODEL=Flash_Drive_FIT\nID_MODEL_ENC=Flash\\x20Drive\\x20FIT\\x20\nID_MODEL_ID=1000\nID_PART_ENTRY_DISK=8:0\nID_PART_ENTRY_FLAGS=0x4\nID_PART_ENTRY_NAME=BIOS\nID_PART_ENTRY_NUMBER=2\nID_PART_ENTRY_OFFSET=206848\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_SIZE=2048\nID_PART_ENTRY_TYPE=21686148-6449-6e6f-744e-656564454649\nID_PART_ENTRY_UUID=af96884d-7668-46a0-83d7-6add9267db4a\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=5e6cebac-824d-4af1-88fb-c16d58f4d913\nID_PATH=pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0\nID_PATH_TAG=pci-0000_00_14_0-usb-0_8_1_0-scsi-0_0_0_0\nID_REVISION=1100\nID_SERIAL=Samsung_Flash_Drive_FIT_0346222100000240-0:0\nID_SERIAL_SHORT=0346222100000240\nID_TYPE=disk\nID_USB_DRIVER=usb-storage\nID_USB_INTERFACES=:080650:\nID_USB_INTERFACE_NUM=00\nID_VENDOR=Samsung\nID_VENDOR_ENC=Samsung\\x20\nID_VENDOR_ID=090c\nMAJOR=8\nMINOR=2\nPARTN=2\nPARTNAME=BIOS\nSUBSYSTEM=block\nUSEC_INITIALIZED=6580869"
2023-06-26 14:25:45.350907 D | exec: Running command: lsblk /dev/sda2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2023-06-26 14:25:45.356825 D | sys: lsblk output: "SIZE=\"1048576\" ROTA=\"1\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/sda\" NAME=\"/dev/sda2\" KNAME=\"/dev/sda2\" MOUNTPOINT=\"\" FSTYPE=\"\""
2023-06-26 14:25:45.356856 D | exec: Running command: ceph-volume inventory --format json /dev/sda2
2023-06-26 14:25:45.764698 I | cephosd: skipping device "sda2": ["Insufficient space (<5GB)"].
2023-06-26 14:25:45.764707 I | cephosd: skipping device "sda3" because it contains a filesystem "xfs"
2023-06-26 14:25:45.764710 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2023-06-26 14:25:45.765439 D | exec: Running command: udevadm info --query=property /dev/sda4
2023-06-26 14:25:45.767936 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part4 /dev/disk/by-partlabel/META /dev/disk/by-partuuid/d16d8c5e-e552-4a79-9adc-e69b64299881 /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part4\nDEVNAME=/dev/sda4\nDEVPATH=/devices/pci0000:00/0000:00:14.0/usb2/2-8/2-8:1.0/host5/target5:0:0/5:0:0:0/block/sda/sda4\nDEVTYPE=partition\nDISKSEQ=11\nID_BUS=usb\nID_INSTANCE=0:0\nID_MODEL=Flash_Drive_FIT\nID_MODEL_ENC=Flash\\x20Drive\\x20FIT\\x20\nID_MODEL_ID=1000\nID_PART_ENTRY_DISK=8:0\nID_PART_ENTRY_NAME=META\nID_PART_ENTRY_NUMBER=4\nID_PART_ENTRY_OFFSET=2256896\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_SIZE=2048\nID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4\nID_PART_ENTRY_UUID=d16d8c5e-e552-4a79-9adc-e69b64299881\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=5e6cebac-824d-4af1-88fb-c16d58f4d913\nID_PATH=pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0\nID_PATH_TAG=pci-0000_00_14_0-usb-0_8_1_0-scsi-0_0_0_0\nID_REVISION=1100\nID_SERIAL=Samsung_Flash_Drive_FIT_0346222100000240-0:0\nID_SERIAL_SHORT=0346222100000240\nID_TYPE=disk\nID_USB_DRIVER=usb-storage\nID_USB_INTERFACES=:080650:\nID_USB_INTERFACE_NUM=00\nID_VENDOR=Samsung\nID_VENDOR_ENC=Samsung\\x20\nID_VENDOR_ID=090c\nMAJOR=8\nMINOR=4\nPARTN=4\nPARTNAME=META\nSUBSYSTEM=block\nUSEC_INITIALIZED=6585690"
2023-06-26 14:25:45.767948 D | exec: Running command: lsblk /dev/sda4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2023-06-26 14:25:45.769785 D | sys: lsblk output: "SIZE=\"1048576\" ROTA=\"1\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/sda\" NAME=\"/dev/sda4\" KNAME=\"/dev/sda4\" MOUNTPOINT=\"\" FSTYPE=\"\""
2023-06-26 14:25:45.769809 D | exec: Running command: ceph-volume inventory --format json /dev/sda4
2023-06-26 14:25:46.135108 I | cephosd: skipping device "sda4": ["Insufficient space (<5GB)"].
2023-06-26 14:25:46.135119 I | cephosd: skipping device "sda5" with mountpoint "state"
2023-06-26 14:25:46.135121 I | cephosd: skipping device "sda6" with mountpoint "var"
2023-06-26 14:25:46.135123 I | cephosd: skipping device "nvme0n1" because it contains a filesystem "ceph_bluestore"
2023-06-26 14:25:46.135125 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2023-06-26 14:25:46.136473 D | exec: Running command: lsblk /dev/nvme1n1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2023-06-26 14:25:46.138251 D | sys: lsblk output: "SIZE=\"1024209543168\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nvme1n1\" KNAME=\"/dev/nvme1n1\" MOUNTPOINT=\"\" FSTYPE=\"\""
2023-06-26 14:25:46.138261 D | exec: Running command: ceph-volume inventory --format json /dev/nvme1n1
2023-06-26 14:25:46.475115 I | cephosd: device "nvme1n1" is available.
2023-06-26 14:25:46.475127 I | cephosd: "nvme1n1" found in the desired devices
2023-06-26 14:25:46.475129 I | cephosd: device "nvme1n1" is selected by the device filter/name "/dev/nvme1n1"
2023-06-26 14:25:46.480088 I | cephosd: configuring osd devices: {"Entries":{"nvme1n1":{"Data":-1,"Metadata":null,"Config":{"Name":"/dev/nvme1n1","OSDsPerDevice":4,"MetadataDevice":"","DatabaseSizeMB":0,"DeviceClass":"nvme","InitialWeight":"","IsFilter":false,"IsDevicePathFilter":false},"PersistentDevicePaths":["/dev/ceph-disks/2-2","/dev/disk/by-id/nvme-H20_HBRPEKNL0203A_NVMe_INTEL_1024GB_PHPG1246003M1P0B-1","/dev/disk/by-id/nvme-eui.5cd2e47811500761"],"DeviceInfo":{"name":"nvme1n1","parent":"","hasChildren":false,"devLinks":"/dev/ceph-disks/2-2 /dev/disk/by-id/nvme-H20_HBRPEKNL0203A_NVMe_INTEL_1024GB_PHPG1246003M1P0B-1 /dev/disk/by-id/nvme-eui.5cd2e47811500761","size":1024209543168,"uuid":"83fd6be2-d251-4896-83d4-371ce3941781","serial":"H20 HBRPEKNL0203A NVMe INTEL 1024GB_PHPG1246003M1P0B-1","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","mountpoint":"","vendor":"","model":"H20 HBRPEKNL0203A NVMe INTEL 1024GB","wwn":"eui.5cd2e47811500761","wwnVendorExtension":"","empty":false,"real-path":"/dev/nvme1n1","kernel-name":"nvme1n1"}}}}
2023-06-26 14:25:46.480109 I | cephclient: getting or creating ceph auth key "client.bootstrap-osd"
2023-06-26 14:25:46.480114 D | exec: Running command: ceph auth get-or-create-key client.bootstrap-osd mon allow profile bootstrap-osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-06-26 14:25:46.682470 D | cephosd: won't use raw mode for disk "/dev/nvme1n1" since osd per device is 4
2023-06-26 14:25:46.682511 I | cephosd: configuring new LVM device nvme1n1
2023-06-26 14:25:46.682514 I | cephosd: Base command - stdbuf
2023-06-26 14:25:46.682519 I | cephosd: immediateExecuteArgs - [-oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 4 /dev/nvme1n1 --crush-device-class nvme]
2023-06-26 14:25:46.682522 I | cephosd: immediateReportArgs - [-oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 4 /dev/nvme1n1 --crush-device-class nvme --report]
2023-06-26 14:25:46.682524 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 4 /dev/nvme1n1 --crush-device-class nvme --report
2023-06-26 14:25:47.007052 D | exec:
2023-06-26 14:25:47.007087 D | exec: Total OSDs: 4
2023-06-26 14:25:47.007095 D | exec:
2023-06-26 14:25:47.007100 D | exec: Type Path LV Size % of device
2023-06-26 14:25:47.007104 D | exec: ----------------------------------------------------------------------------------------------------
2023-06-26 14:25:47.007111 D | exec: data /dev/nvme1n1 238.47 GB 25.00%
2023-06-26 14:25:47.007114 D | exec: ----------------------------------------------------------------------------------------------------
2023-06-26 14:25:47.007118 D | exec: data /dev/nvme1n1 238.47 GB 25.00%
2023-06-26 14:25:47.007121 D | exec: ----------------------------------------------------------------------------------------------------
2023-06-26 14:25:47.007126 D | exec: data /dev/nvme1n1 238.47 GB 25.00%
2023-06-26 14:25:47.007130 D | exec: ----------------------------------------------------------------------------------------------------
2023-06-26 14:25:47.007136 D | exec: data /dev/nvme1n1 238.47 GB 25.00%
2023-06-26 14:25:47.007142 D | exec: --> DEPRECATION NOTICE
2023-06-26 14:25:47.007202 D | exec: --> You are using the legacy automatic disk sorting behavior
2023-06-26 14:25:47.007208 D | exec: --> The Pacific release will change the default to --no-auto
2023-06-26 14:25:47.007212 D | exec: --> passed data devices: 1 physical, 0 LVM
2023-06-26 14:25:47.007217 D | exec: --> relative data size: 0.25
2023-06-26 14:25:47.035331 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 4 /dev/nvme1n1 --crush-device-class nvme
2023-06-26 14:25:48.385322 D | exec: --> DEPRECATION NOTICE
2023-06-26 14:25:48.385377 D | exec: --> You are using the legacy automatic disk sorting behavior
2023-06-26 14:25:48.385385 D | exec: --> The Pacific release will change the default to --no-auto
2023-06-26 14:25:48.385391 D | exec: --> passed data devices: 1 physical, 0 LVM
2023-06-26 14:25:48.385397 D | exec: --> relative data size: 0.25
2023-06-26 14:25:48.385401 D | exec: Running command: /usr/bin/ceph-authtool --gen-print-key
2023-06-26 14:25:48.385414 D | exec: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3fdec961-415a-434b-ac5b-d344f9916fe9
2023-06-26 14:25:48.385421 D | exec: Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts vgcreate --force --yes ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 /dev/nvme1n1
2023-06-26 14:25:48.385426 D | exec: stdout: Physical volume "/dev/nvme1n1" successfully created.
2023-06-26 14:25:48.385432 D | exec: stdout: Volume group "ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900" successfully created
2023-06-26 14:25:48.385438 D | exec: Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts lvcreate --yes -l 61047 -n osd-block-3fdec961-415a-434b-ac5b-d344f9916fe9 ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900
2023-06-26 14:25:48.385443 D | exec: stderr: Command failed with status code 5.
2023-06-26 14:25:48.385448 D | exec: --> Was unable to complete a new OSD, will rollback changes
2023-06-26 14:25:48.385453 D | exec: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.5 --yes-i-really-mean-it
2023-06-26 14:25:48.385458 D | exec: stderr: purged osd.5
2023-06-26 14:25:48.388263 D | exec: Traceback (most recent call last):
2023-06-26 14:25:48.388276 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
2023-06-26 14:25:48.388283 D | exec: self.prepare()
2023-06-26 14:25:48.388287 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
2023-06-26 14:25:48.388292 D | exec: return func(*a, **kw)
2023-06-26 14:25:48.388296 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 363, in prepare
2023-06-26 14:25:48.388300 D | exec: block_lv = self.prepare_data_device('block', osd_fsid)
2023-06-26 14:25:48.388305 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 221, in prepare_data_device
2023-06-26 14:25:48.388310 D | exec: **kwargs)
2023-06-26 14:25:48.388315 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 1006, in create_lv
2023-06-26 14:25:48.388320 D | exec: process.run(command, run_on_host=True)
2023-06-26 14:25:48.388324 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 147, in run
2023-06-26 14:25:48.388329 D | exec: raise RuntimeError(msg)
2023-06-26 14:25:48.388333 D | exec: RuntimeError: command returned non-zero exit status: 5
2023-06-26 14:25:48.388338 D | exec:
2023-06-26 14:25:48.388344 D | exec: During handling of the above exception, another exception occurred:
2023-06-26 14:25:48.388349 D | exec:
2023-06-26 14:25:48.388353 D | exec: Traceback (most recent call last):
2023-06-26 14:25:48.388357 D | exec: File "/usr/sbin/ceph-volume", line 11, in <module>
2023-06-26 14:25:48.388362 D | exec: load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
2023-06-26 14:25:48.388366 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in __init__
2023-06-26 14:25:48.388371 D | exec: self.main(self.argv)
2023-06-26 14:25:48.388381 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
2023-06-26 14:25:48.388385 D | exec: return f(*a, **kw)
2023-06-26 14:25:48.388390 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main
2023-06-26 14:25:48.388394 D | exec: terminal.dispatch(self.mapper, subcommand_args)
2023-06-26 14:25:48.388399 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
2023-06-26 14:25:48.388404 D | exec: instance.main()
2023-06-26 14:25:48.388408 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
2023-06-26 14:25:48.388413 D | exec: terminal.dispatch(self.mapper, self.argv)
2023-06-26 14:25:48.388418 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
2023-06-26 14:25:48.388423 D | exec: instance.main()
2023-06-26 14:25:48.388428 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
2023-06-26 14:25:48.388433 D | exec: return func(*a, **kw)
2023-06-26 14:25:48.388438 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 441, in main
2023-06-26 14:25:48.388443 D | exec: self._execute(plan)
2023-06-26 14:25:48.388448 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 457, in _execute
2023-06-26 14:25:48.388452 D | exec: p.safe_prepare(argparse.Namespace(**args))
2023-06-26 14:25:48.388456 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 256, in safe_prepare
2023-06-26 14:25:48.388461 D | exec: rollback_osd(self.args, self.osd_id)
2023-06-26 14:25:48.388466 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/common.py", line 35, in rollback_osd
2023-06-26 14:25:48.388470 D | exec: Zap(['--destroy', '--osd-id', osd_id]).main()
2023-06-26 14:25:48.388475 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 404, in main
2023-06-26 14:25:48.388483 D | exec: self.zap_osd()
2023-06-26 14:25:48.388488 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
2023-06-26 14:25:48.388493 D | exec: return func(*a, **kw)
2023-06-26 14:25:48.388501 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 301, in zap_osd
2023-06-26 14:25:48.388506 D | exec: devices = find_associated_devices(self.args.osd_id, self.args.osd_fsid)
2023-06-26 14:25:48.388511 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 88, in find_associated_devices
2023-06-26 14:25:48.388516 D | exec: '%s' % osd_id or osd_fsid)
2023-06-26 14:25:48.388520 D | exec: RuntimeError: Unable to find any LV for zapping OSD: 5
2023-06-26 14:25:48.405112 E | cephosd: [2023-06-26 14:25:46,791][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 4 /dev/nvme1n1 --crush-device-class nvme --report
[2023-06-26 14:25:46,794][ceph_volume.util.system][WARNING] Executable lvs not found on the host, will return lvs as-is
[2023-06-26 14:25:46,794][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/nvme1n1 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2023-06-26 14:25:46,870][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -P -o NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL --nodeps /dev/nvme1n1
[2023-06-26 14:25:46,878][ceph_volume.process][INFO ] stdout NAME="nvme1n1" KNAME="nvme1n1" PKNAME="" MAJ:MIN="259:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="H20 HBRPEKNL0203A NVMe INTEL 1024GB " SIZE="953.9G" STATE="live" OWNER="ceph" GROUP="ceph" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-06-26 14:25:46,882][ceph_volume.util.system][WARNING] Executable pvs not found on the host, will return pvs as-is
[2023-06-26 14:25:46,883][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o pv_name,vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
[2023-06-26 14:25:46,962][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -c /dev/null -p /dev/nvme1n1
[2023-06-26 14:25:46,971][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -P -o NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
[2023-06-26 14:25:46,981][ceph_volume.process][INFO ] stdout NAME="loop0" KNAME="loop0" PKNAME="" MAJ:MIN="7:0" FSTYPE="squashfs" MOUNTPOINT="/rootfs" LABEL="" UUID="" RO="1" RM="0" MODEL="" SIZE="49.2M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="loop" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="4G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-06-26 14:25:46,981][ceph_volume.process][INFO ] stdout NAME="sda" KNAME="sda" PKNAME="" MAJ:MIN="8:0" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="Flash Drive FIT " SIZE="239G" STATE="running" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-06-26 14:25:46,981][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" PKNAME="sda" MAJ:MIN="8:1" FSTYPE="vfat" MOUNTPOINT="" LABEL="EFI" UUID="6494-1D42" RO="0" RM="1" MODEL="" SIZE="100M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="EFI"
[2023-06-26 14:25:46,981][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" PKNAME="sda" MAJ:MIN="8:2" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="" SIZE="1M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="BIOS"
[2023-06-26 14:25:46,981][ceph_volume.process][INFO ] stdout NAME="sda3" KNAME="sda3" PKNAME="sda" MAJ:MIN="8:3" FSTYPE="xfs" MOUNTPOINT="" LABEL="BOOT" UUID="d5a06eb3-df9f-449b-942f-5990f9a5a7a4" RO="0" RM="1" MODEL="" SIZE="1000M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="BOOT"
[2023-06-26 14:25:46,981][ceph_volume.process][INFO ] stdout NAME="sda4" KNAME="sda4" PKNAME="sda" MAJ:MIN="8:4" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="" SIZE="1M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="META"
[2023-06-26 14:25:46,981][ceph_volume.process][INFO ] stdout NAME="sda5" KNAME="sda5" PKNAME="sda" MAJ:MIN="8:5" FSTYPE="xfs" MOUNTPOINT="/rootfs/system/state" LABEL="STATE" UUID="dfed63cf-c7c6-4a13-b193-c16f13fb73df" RO="0" RM="1" MODEL="" SIZE="100M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="STATE"
[2023-06-26 14:25:46,981][ceph_volume.process][INFO ] stdout NAME="sda6" KNAME="sda6" PKNAME="sda" MAJ:MIN="8:6" FSTYPE="xfs" MOUNTPOINT="/rootfs/var" LABEL="EPHEMERAL" UUID="81cd20b9-1b29-4a73-b058-826052b71a00" RO="0" RM="1" MODEL="" SIZE="237.9G" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="EPHEMERAL"
[2023-06-26 14:25:46,981][ceph_volume.process][INFO ] stdout NAME="nvme0n1" KNAME="nvme0n1" PKNAME="" MAJ:MIN="259:0" FSTYPE="ceph_bluestore" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="H20 HBRPEKNL0203A NVMe INTEL 1024GB " SIZE="953.9G" STATE="live" OWNER="ceph" GROUP="ceph" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-06-26 14:25:46,981][ceph_volume.process][INFO ] stdout NAME="nvme1n1" KNAME="nvme1n1" PKNAME="" MAJ:MIN="259:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="H20 HBRPEKNL0203A NVMe INTEL 1024GB " SIZE="953.9G" STATE="live" OWNER="ceph" GROUP="ceph" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-06-26 14:25:46,982][ceph_volume.util.disk][INFO ] opening device /dev/nvme1n1 to check for BlueStore label
[2023-06-26 14:25:46,982][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -P -o NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
[2023-06-26 14:25:46,991][ceph_volume.process][INFO ] stdout NAME="loop0" KNAME="loop0" PKNAME="" MAJ:MIN="7:0" FSTYPE="squashfs" MOUNTPOINT="/rootfs" LABEL="" UUID="" RO="1" RM="0" MODEL="" SIZE="49.2M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="loop" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="4G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-06-26 14:25:46,991][ceph_volume.process][INFO ] stdout NAME="sda" KNAME="sda" PKNAME="" MAJ:MIN="8:0" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="Flash Drive FIT " SIZE="239G" STATE="running" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-06-26 14:25:46,991][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" PKNAME="sda" MAJ:MIN="8:1" FSTYPE="vfat" MOUNTPOINT="" LABEL="EFI" UUID="6494-1D42" RO="0" RM="1" MODEL="" SIZE="100M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="EFI"
[2023-06-26 14:25:46,991][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" PKNAME="sda" MAJ:MIN="8:2" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="" SIZE="1M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="BIOS"
[2023-06-26 14:25:46,991][ceph_volume.process][INFO ] stdout NAME="sda3" KNAME="sda3" PKNAME="sda" MAJ:MIN="8:3" FSTYPE="xfs" MOUNTPOINT="" LABEL="BOOT" UUID="d5a06eb3-df9f-449b-942f-5990f9a5a7a4" RO="0" RM="1" MODEL="" SIZE="1000M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="BOOT"
[2023-06-26 14:25:46,991][ceph_volume.process][INFO ] stdout NAME="sda4" KNAME="sda4" PKNAME="sda" MAJ:MIN="8:4" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="" SIZE="1M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="META"
[2023-06-26 14:25:46,991][ceph_volume.process][INFO ] stdout NAME="sda5" KNAME="sda5" PKNAME="sda" MAJ:MIN="8:5" FSTYPE="xfs" MOUNTPOINT="/rootfs/system/state" LABEL="STATE" UUID="dfed63cf-c7c6-4a13-b193-c16f13fb73df" RO="0" RM="1" MODEL="" SIZE="100M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="STATE"
[2023-06-26 14:25:46,992][ceph_volume.process][INFO ] stdout NAME="sda6" KNAME="sda6" PKNAME="sda" MAJ:MIN="8:6" FSTYPE="xfs" MOUNTPOINT="/rootfs/var" LABEL="EPHEMERAL" UUID="81cd20b9-1b29-4a73-b058-826052b71a00" RO="0" RM="1" MODEL="" SIZE="237.9G" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="EPHEMERAL"
[2023-06-26 14:25:46,992][ceph_volume.process][INFO ] stdout NAME="nvme0n1" KNAME="nvme0n1" PKNAME="" MAJ:MIN="259:0" FSTYPE="ceph_bluestore" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="H20 HBRPEKNL0203A NVMe INTEL 1024GB " SIZE="953.9G" STATE="live" OWNER="ceph" GROUP="ceph" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-06-26 14:25:46,992][ceph_volume.process][INFO ] stdout NAME="nvme1n1" KNAME="nvme1n1" PKNAME="" MAJ:MIN="259:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="H20 HBRPEKNL0203A NVMe INTEL 1024GB " SIZE="953.9G" STATE="live" OWNER="ceph" GROUP="ceph" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-06-26 14:25:46,992][ceph_volume.util.disk][INFO ] opening device /dev/nvme1n1 to check for BlueStore label
[2023-06-26 14:25:46,993][ceph_volume.process][INFO ] Running command: /usr/sbin/udevadm info --query=property /dev/nvme1n1
[2023-06-26 14:25:47,004][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/nvme-eui.5cd2e47811500761 /dev/disk/by-id/nvme-H20_HBRPEKNL0203A_NVMe_INTEL_1024GB_PHPG1246003M1P0B-1 /dev/ceph-disks/2-2
[2023-06-26 14:25:47,004][ceph_volume.process][INFO ] stdout DEVNAME=/dev/nvme1n1
[2023-06-26 14:25:47,004][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:6c:00.0/nvme/nvme1/nvme1n1
[2023-06-26 14:25:47,004][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2023-06-26 14:25:47,004][ceph_volume.process][INFO ] stdout DISKSEQ=10
[2023-06-26 14:25:47,004][ceph_volume.process][INFO ] stdout ID_MODEL=H20 HBRPEKNL0203A NVMe INTEL 1024GB
[2023-06-26 14:25:47,004][ceph_volume.process][INFO ] stdout ID_SERIAL=H20 HBRPEKNL0203A NVMe INTEL 1024GB_PHPG1246003M1P0B-1
[2023-06-26 14:25:47,005][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=PHPG1246003M1P0B-1
[2023-06-26 14:25:47,005][ceph_volume.process][INFO ] stdout ID_WWN=eui.5cd2e47811500761
[2023-06-26 14:25:47,005][ceph_volume.process][INFO ] stdout MAJOR=259
[2023-06-26 14:25:47,005][ceph_volume.process][INFO ] stdout MINOR=1
[2023-06-26 14:25:47,005][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2023-06-26 14:25:47,005][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=6551803
[2023-06-26 14:25:47,005][ceph_volume.util.disk][INFO ] opening device /dev/nvme1n1 to check for BlueStore label
[2023-06-26 14:25:47,006][ceph_volume.devices.lvm.batch][WARNING] DEPRECATION NOTICE
[2023-06-26 14:25:47,006][ceph_volume.devices.lvm.batch][WARNING] You are using the legacy automatic disk sorting behavior
[2023-06-26 14:25:47,006][ceph_volume.devices.lvm.batch][WARNING] The Pacific release will change the default to --no-auto
[2023-06-26 14:25:47,006][ceph_volume.devices.lvm.batch][DEBUG ] passed data devices: 1 physical, 0 LVM
[2023-06-26 14:25:47,006][ceph_volume.devices.lvm.batch][DEBUG ] relative data size: 0.25
[2023-06-26 14:25:47,139][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 4 /dev/nvme1n1 --crush-device-class nvme
[2023-06-26 14:25:47,142][ceph_volume.util.system][WARNING] Executable lvs not found on the host, will return lvs as-is
[2023-06-26 14:25:47,142][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/nvme1n1 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2023-06-26 14:25:47,214][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -P -o NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL --nodeps /dev/nvme1n1
[2023-06-26 14:25:47,224][ceph_volume.process][INFO ] stdout NAME="nvme1n1" KNAME="nvme1n1" PKNAME="" MAJ:MIN="259:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="H20 HBRPEKNL0203A NVMe INTEL 1024GB " SIZE="953.9G" STATE="live" OWNER="ceph" GROUP="ceph" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-06-26 14:25:47,230][ceph_volume.util.system][WARNING] Executable pvs not found on the host, will return pvs as-is
[2023-06-26 14:25:47,231][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o pv_name,vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
[2023-06-26 14:25:47,318][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -c /dev/null -p /dev/nvme1n1
[2023-06-26 14:25:47,328][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -P -o NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
[2023-06-26 14:25:47,342][ceph_volume.process][INFO ] stdout NAME="loop0" KNAME="loop0" PKNAME="" MAJ:MIN="7:0" FSTYPE="squashfs" MOUNTPOINT="/rootfs" LABEL="" UUID="" RO="1" RM="0" MODEL="" SIZE="49.2M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="loop" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="4G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-06-26 14:25:47,342][ceph_volume.process][INFO ] stdout NAME="sda" KNAME="sda" PKNAME="" MAJ:MIN="8:0" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="Flash Drive FIT " SIZE="239G" STATE="running" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B"
DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" [2023-06-26 14:25:47,342][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" PKNAME="sda" MAJ:MIN="8:1" FSTYPE="vfat" MOUNTPOINT="" LABEL="EFI" UUID="6494-1D42" RO="0" RM="1" MODEL="" SIZE="100M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="EFI" [2023-06-26 14:25:47,342][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" PKNAME="sda" MAJ:MIN="8:2" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="" SIZE="1M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="BIOS" [2023-06-26 14:25:47,342][ceph_volume.process][INFO ] stdout NAME="sda3" KNAME="sda3" PKNAME="sda" MAJ:MIN="8:3" FSTYPE="xfs" MOUNTPOINT="" LABEL="BOOT" UUID="d5a06eb3-df9f-449b-942f-5990f9a5a7a4" RO="0" RM="1" MODEL="" SIZE="1000M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="BOOT" [2023-06-26 14:25:47,342][ceph_volume.process][INFO ] stdout NAME="sda4" KNAME="sda4" PKNAME="sda" MAJ:MIN="8:4" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="" SIZE="1M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="META" [2023-06-26 14:25:47,342][ceph_volume.process][INFO ] stdout NAME="sda5" KNAME="sda5" PKNAME="sda" MAJ:MIN="8:5" FSTYPE="xfs" MOUNTPOINT="/rootfs/system/state" LABEL="STATE" UUID="dfed63cf-c7c6-4a13-b193-c16f13fb73df" RO="0" RM="1" MODEL="" 
SIZE="100M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="STATE" [2023-06-26 14:25:47,343][ceph_volume.process][INFO ] stdout NAME="sda6" KNAME="sda6" PKNAME="sda" MAJ:MIN="8:6" FSTYPE="xfs" MOUNTPOINT="/rootfs/var" LABEL="EPHEMERAL" UUID="81cd20b9-1b29-4a73-b058-826052b71a00" RO="0" RM="1" MODEL="" SIZE="237.9G" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="EPHEMERAL" [2023-06-26 14:25:47,343][ceph_volume.process][INFO ] stdout NAME="nvme0n1" KNAME="nvme0n1" PKNAME="" MAJ:MIN="259:0" FSTYPE="ceph_bluestore" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="H20 HBRPEKNL0203A NVMe INTEL 1024GB " SIZE="953.9G" STATE="live" OWNER="ceph" GROUP="ceph" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL="" [2023-06-26 14:25:47,343][ceph_volume.process][INFO ] stdout NAME="nvme1n1" KNAME="nvme1n1" PKNAME="" MAJ:MIN="259:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="H20 HBRPEKNL0203A NVMe INTEL 1024GB " SIZE="953.9G" STATE="live" OWNER="ceph" GROUP="ceph" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL="" [2023-06-26 14:25:47,343][ceph_volume.util.disk][INFO ] opening device /dev/nvme1n1 to check for BlueStore label [2023-06-26 14:25:47,343][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -P -o 
NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL [2023-06-26 14:25:47,352][ceph_volume.process][INFO ] stdout NAME="loop0" KNAME="loop0" PKNAME="" MAJ:MIN="7:0" FSTYPE="squashfs" MOUNTPOINT="/rootfs" LABEL="" UUID="" RO="1" RM="0" MODEL="" SIZE="49.2M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="loop" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="4G" DISC-ZERO="0" PKNAME="" PARTLABEL="" [2023-06-26 14:25:47,352][ceph_volume.process][INFO ] stdout NAME="sda" KNAME="sda" PKNAME="" MAJ:MIN="8:0" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="Flash Drive FIT " SIZE="239G" STATE="running" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" [2023-06-26 14:25:47,352][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" PKNAME="sda" MAJ:MIN="8:1" FSTYPE="vfat" MOUNTPOINT="" LABEL="EFI" UUID="6494-1D42" RO="0" RM="1" MODEL="" SIZE="100M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="EFI" [2023-06-26 14:25:47,352][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" PKNAME="sda" MAJ:MIN="8:2" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="" SIZE="1M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="BIOS" [2023-06-26 14:25:47,352][ceph_volume.process][INFO ] stdout NAME="sda3" KNAME="sda3" PKNAME="sda" MAJ:MIN="8:3" FSTYPE="xfs" 
MOUNTPOINT="" LABEL="BOOT" UUID="d5a06eb3-df9f-449b-942f-5990f9a5a7a4" RO="0" RM="1" MODEL="" SIZE="1000M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="BOOT" [2023-06-26 14:25:47,352][ceph_volume.process][INFO ] stdout NAME="sda4" KNAME="sda4" PKNAME="sda" MAJ:MIN="8:4" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="" SIZE="1M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="META" [2023-06-26 14:25:47,352][ceph_volume.process][INFO ] stdout NAME="sda5" KNAME="sda5" PKNAME="sda" MAJ:MIN="8:5" FSTYPE="xfs" MOUNTPOINT="/rootfs/system/state" LABEL="STATE" UUID="dfed63cf-c7c6-4a13-b193-c16f13fb73df" RO="0" RM="1" MODEL="" SIZE="100M" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="STATE" [2023-06-26 14:25:47,352][ceph_volume.process][INFO ] stdout NAME="sda6" KNAME="sda6" PKNAME="sda" MAJ:MIN="8:6" FSTYPE="xfs" MOUNTPOINT="/rootfs/var" LABEL="EPHEMERAL" UUID="81cd20b9-1b29-4a73-b058-826052b71a00" RO="0" RM="1" MODEL="" SIZE="237.9G" STATE="" OWNER="root" GROUP="root" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="EPHEMERAL" [2023-06-26 14:25:47,352][ceph_volume.process][INFO ] stdout NAME="nvme0n1" KNAME="nvme0n1" PKNAME="" MAJ:MIN="259:0" FSTYPE="ceph_bluestore" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="H20 HBRPEKNL0203A NVMe INTEL 1024GB " SIZE="953.9G" STATE="live" OWNER="ceph" GROUP="ceph" MODE="brw-------" 
ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL="" [2023-06-26 14:25:47,352][ceph_volume.process][INFO ] stdout NAME="nvme1n1" KNAME="nvme1n1" PKNAME="" MAJ:MIN="259:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="H20 HBRPEKNL0203A NVMe INTEL 1024GB " SIZE="953.9G" STATE="live" OWNER="ceph" GROUP="ceph" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL="" [2023-06-26 14:25:47,352][ceph_volume.util.disk][INFO ] opening device /dev/nvme1n1 to check for BlueStore label [2023-06-26 14:25:47,352][ceph_volume.process][INFO ] Running command: /usr/sbin/udevadm info --query=property /dev/nvme1n1 [2023-06-26 14:25:47,356][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/nvme-H20_HBRPEKNL0203A_NVMe_INTEL_1024GB_PHPG1246003M1P0B-1 /dev/ceph-disks/2-2 /dev/disk/by-id/nvme-eui.5cd2e47811500761 [2023-06-26 14:25:47,356][ceph_volume.process][INFO ] stdout DEVNAME=/dev/nvme1n1 [2023-06-26 14:25:47,356][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:6c:00.0/nvme/nvme1/nvme1n1 [2023-06-26 14:25:47,356][ceph_volume.process][INFO ] stdout DEVTYPE=disk [2023-06-26 14:25:47,356][ceph_volume.process][INFO ] stdout DISKSEQ=10 [2023-06-26 14:25:47,356][ceph_volume.process][INFO ] stdout ID_MODEL=H20 HBRPEKNL0203A NVMe INTEL 1024GB [2023-06-26 14:25:47,356][ceph_volume.process][INFO ] stdout ID_SERIAL=H20 HBRPEKNL0203A NVMe INTEL 1024GB_PHPG1246003M1P0B-1 [2023-06-26 14:25:47,356][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=PHPG1246003M1P0B-1 [2023-06-26 14:25:47,356][ceph_volume.process][INFO ] stdout ID_WWN=eui.5cd2e47811500761 [2023-06-26 14:25:47,356][ceph_volume.process][INFO ] stdout MAJOR=259 [2023-06-26 14:25:47,356][ceph_volume.process][INFO ] stdout MINOR=1 [2023-06-26 
14:25:47,356][ceph_volume.process][INFO ] stdout SUBSYSTEM=block [2023-06-26 14:25:47,356][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=6551803 [2023-06-26 14:25:47,356][ceph_volume.util.disk][INFO ] opening device /dev/nvme1n1 to check for BlueStore label [2023-06-26 14:25:47,357][ceph_volume.devices.lvm.batch][WARNING] DEPRECATION NOTICE [2023-06-26 14:25:47,357][ceph_volume.devices.lvm.batch][WARNING] You are using the legacy automatic disk sorting behavior [2023-06-26 14:25:47,357][ceph_volume.devices.lvm.batch][WARNING] The Pacific release will change the default to --no-auto [2023-06-26 14:25:47,357][ceph_volume.devices.lvm.batch][DEBUG ] passed data devices: 1 physical, 0 LVM [2023-06-26 14:25:47,357][ceph_volume.devices.lvm.batch][DEBUG ] relative data size: 0.25 [2023-06-26 14:25:47,357][ceph_volume.api.lvm][WARNING] device is not part of ceph: None [2023-06-26 14:25:47,357][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key [2023-06-26 14:25:47,367][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3fdec961-415a-434b-ac5b-d344f9916fe9 [2023-06-26 14:25:47,627][ceph_volume.process][INFO ] stdout 5 [2023-06-26 14:25:47,627][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -P -o NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL --nodeps /dev/nvme1n1 [2023-06-26 14:25:47,632][ceph_volume.process][INFO ] stdout NAME="nvme1n1" KNAME="nvme1n1" PKNAME="" MAJ:MIN="259:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="H20 HBRPEKNL0203A NVMe INTEL 1024GB " SIZE="953.9G" STATE="live" OWNER="ceph" GROUP="ceph" MODE="brw-------" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2T" 
DISC-ZERO="0" PKNAME="" PARTLABEL="" [2023-06-26 14:25:47,633][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 238.47 GB [2023-06-26 14:25:47,636][ceph_volume.util.system][WARNING] Executable pvs not found on the host, will return pvs as-is [2023-06-26 14:25:47,637][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/nvme1n1 [2023-06-26 14:25:47,714][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/nvme1n1". [2023-06-26 14:25:47,719][ceph_volume.util.system][WARNING] Executable vgcreate not found on the host, will return vgcreate as-is [2023-06-26 14:25:47,720][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts vgcreate --force --yes ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 /dev/nvme1n1 [2023-06-26 14:25:47,755][ceph_volume.process][INFO ] stdout Physical volume "/dev/nvme1n1" successfully created. 
[2023-06-26 14:25:47,817][ceph_volume.process][INFO ] stdout Volume group "ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900" successfully created [2023-06-26 14:25:47,822][ceph_volume.util.system][WARNING] Executable vgs not found on the host, will return vgs as-is [2023-06-26 14:25:47,823][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts vgs --noheadings --readonly --units=b --nosuffix --separator=";" -S vg_name=ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size [2023-06-26 14:25:47,926][ceph_volume.process][INFO ] stdout ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900";"1";"0";"wz--n-";"244190";"244190";"4194304 [2023-06-26 14:25:47,927][ceph_volume.api.lvm][DEBUG ] size was passed: 238.47 GB -> 61047 [2023-06-26 14:25:47,932][ceph_volume.util.system][WARNING] Executable lvcreate not found on the host, will return lvcreate as-is [2023-06-26 14:25:47,933][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts lvcreate --yes -l 61047 -n osd-block-3fdec961-415a-434b-ac5b-d344f9916fe9 ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 [2023-06-26 14:25:48,009][ceph_volume.process][INFO ] stderr Command failed with status code 5. 
[2023-06-26 14:25:48,010][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare self.prepare() File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root return func(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 363, in prepare block_lv = self.prepare_data_device('block', osd_fsid) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 221, in prepare_data_device **kwargs) File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 1006, in create_lv process.run(command, run_on_host=True) File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 147, in run raise RuntimeError(msg) RuntimeError: command returned non-zero exit status: 5 [2023-06-26 14:25:48,011][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation [2023-06-26 14:25:48,012][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.5 --yes-i-really-mean-it [2023-06-26 14:25:48,257][ceph_volume.process][INFO ] stderr purged osd.5 [2023-06-26 14:25:48,273][ceph_volume.process][INFO ] Running command: /usr/bin/systemctl is-active ceph-osd@5 [2023-06-26 14:25:48,278][ceph_volume.process][INFO ] stderr System has not been booted with systemd as init system (PID 1). Can't operate. 
[2023-06-26 14:25:48,278][ceph_volume.process][INFO ] stderr Failed to connect to bus: Host is down [2023-06-26 14:25:48,282][ceph_volume.util.system][WARNING] Executable lvs not found on the host, will return lvs as-is [2023-06-26 14:25:48,282][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S tags={ceph.osd_id=5} -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size [2023-06-26 14:25:48,382][ceph_volume][ERROR ] exception caught by decorator Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare self.prepare() File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root return func(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 363, in prepare block_lv = self.prepare_data_device('block', osd_fsid) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 221, in prepare_data_device **kwargs) File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 1006, in create_lv process.run(command, run_on_host=True) File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 147, in run raise RuntimeError(msg) RuntimeError: command returned non-zero exit status: 5 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc return f(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main terminal.dispatch(self.mapper, subcommand_args) File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch instance.main() File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main 
terminal.dispatch(self.mapper, self.argv) File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch instance.main() File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root return func(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 441, in main self._execute(plan) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 457, in _execute p.safe_prepare(argparse.Namespace(**args)) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 256, in safe_prepare rollback_osd(self.args, self.osd_id) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/common.py", line 35, in rollback_osd Zap(['--destroy', '--osd-id', osd_id]).main() File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 404, in main self.zap_osd() File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root return func(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 301, in zap_osd devices = find_associated_devices(self.args.osd_id, self.args.osd_fsid) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 88, in find_associated_devices '%s' % osd_id or osd_fsid) RuntimeError: Unable to find any LV for zapping OSD: 5 2023-06-26 14:25:48.411495 C | rookcmd: failed to configure devices: failed to initialize osd: failed ceph-volume: exit status 1 ```

The core error appears in these log statements:

```
2023-06-26 14:25:48.385421 D | exec: Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts vgcreate --force --yes ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 /dev/nvme1n1
2023-06-26 14:25:48.385426 D | exec:  stdout: Physical volume "/dev/nvme1n1" successfully created.
2023-06-26 14:25:48.385432 D | exec:  stdout: Volume group "ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900" successfully created
2023-06-26 14:25:48.385438 D | exec: Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts lvcreate --yes -l 61047 -n osd-block-3fdec961-415a-434b-ac5b-d344f9916fe9 ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900
2023-06-26 14:25:48.385443 D | exec:  stderr: Command failed with status code 5.
```

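The `Executable lvcreate not found on the host, will return lvcreate as-is` warnings in the logs come from ceph-volume probing for each binary before falling back to running it through `nsenter`. As a quick sanity check of which LVM tools actually resolve inside the host mount namespace, something like the following sketch can be run from a privileged pod. This is not ceph-volume's own code; the `/rootfs` path and the `nsenter` prefix in the usage comment are assumptions mirroring the invocation seen in the logs above.

```shell
#!/bin/sh
# check_lvm_tools: report which executables resolve when looked up through an
# optional command prefix (e.g. an nsenter invocation into the host's mount
# namespace). With an empty prefix it checks the current environment.
check_lvm_tools() {
  prefix="$1"
  shift
  for cmd in "$@"; do
    # Run the lookup via `sh -c` so it happens inside the target namespace
    # rather than in this shell's own PATH.
    if $prefix sh -c "command -v $cmd" >/dev/null 2>&1; then
      echo "$cmd: found"
    else
      echo "$cmd: missing"
    fi
  done
}

# From a privileged pod with the host filesystem mounted at /rootfs,
# mirroring the nsenter flags ceph-volume uses (assumed paths):
#   check_lvm_tools "nsenter --mount=/rootfs/proc/1/ns/mnt --" \
#     pvs vgs lvs vgcreate lvcreate
```

If `lvcreate` reports as found yet still exits with status 5, the problem is more likely configuration or device state on the host than a missing binary, which matches the verbose trace below.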
I tried running the `lvcreate` command manually with extra logging (`-vvvv`), and this is what I got:

lvcreate logs ~~~ $ nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts lvcreate --yes -l 61047 -vvvv -n osd-block-3fdec961-415a-434b-ac5b-d344f9916fe9 ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 14:50:39.360481 lvcreate[140414] lvmcmdline.c:3160 Version: 2.03.20(2) (2023-03-21) 14:50:39.360501 lvcreate[140414] lvmcmdline.c:3161 Parsing: lvcreate --yes -l 61047 -vvvv -n osd-block-3fdec961-415a-434b-ac5b-d344f9916fe9 ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 14:50:39.360510 lvcreate[140414] lvmcmdline.c:2028 Recognised command lvcreate_linear (id 53 / enum 56). 14:50:39.360520 lvcreate[140414] filters/filter-type.c:61 LVM type filter initialised. 14:50:39.360524 lvcreate[140414] filters/filter-deviceid.c:66 deviceid filter initialised. 14:50:39.360529 lvcreate[140414] device_mapper/libdm-config.c:1085 devices/sysfs_scan not found in config: defaulting to 1 14:50:39.360540 lvcreate[140414] filters/filter-sysfs.c:106 Sysfs filter initialised. 14:50:39.360545 lvcreate[140414] device_mapper/libdm-config.c:1085 devices/scan_lvs not found in config: defaulting to 0 14:50:39.360549 lvcreate[140414] filters/filter-usable.c:144 Usable device filter initialised (scan_lvs 0). 14:50:39.360554 lvcreate[140414] device_mapper/libdm-config.c:1085 devices/multipath_component_detection not found in config: defaulting to 1 14:50:39.360558 lvcreate[140414] filters/filter-mpath.c:87 mpath filter initialised. 14:50:39.360562 lvcreate[140414] filters/filter-partitioned.c:68 Partitioned filter initialised. 14:50:39.360570 lvcreate[140414] filters/filter-signature.c:88 signature filter initialised. 14:50:39.360576 lvcreate[140414] device_mapper/libdm-config.c:1085 devices/md_component_detection not found in config: defaulting to 1 14:50:39.360581 lvcreate[140414] filters/filter-md.c:149 MD filter initialised. 
14:50:39.360585 lvcreate[140414] device_mapper/libdm-config.c:1085 devices/fw_raid_component_detection not found in config: defaulting to 0 14:50:39.360589 lvcreate[140414] filters/filter-composite.c:98 Composite filter initialised. 14:50:39.360595 lvcreate[140414] device_mapper/libdm-config.c:1085 devices/ignore_suspended_devices not found in config: defaulting to 0 14:50:39.360602 lvcreate[140414] device_mapper/libdm-config.c:1085 devices/ignore_lvm_mirrors not found in config: defaulting to 1 14:50:39.360608 lvcreate[140414] filters/filter-persistent.c:187 Persistent filter initialised. 14:50:39.360614 lvcreate[140414] device_mapper/libdm-config.c:1085 devices/scan_lvs not found in config: defaulting to 0 14:50:39.360619 lvcreate[140414] device_mapper/libdm-config.c:1085 devices/allow_mixed_block_sizes not found in config: defaulting to 0 14:50:39.360624 lvcreate[140414] device_mapper/libdm-config.c:986 devices/hints not found in config: defaulting to "all" 14:50:39.360630 lvcreate[140414] device_mapper/libdm-config.c:986 activation/activation_mode not found in config: defaulting to "degraded" 14:50:39.360636 lvcreate[140414] device_mapper/libdm-config.c:1085 metadata/record_lvs_history not found in config: defaulting to 0 14:50:39.360641 lvcreate[140414] device_mapper/libdm-config.c:986 devices/search_for_devnames not found in config: defaulting to "auto" 14:50:39.360649 lvcreate[140414] lvmcmdline.c:3235 DEGRADED MODE. Incomplete RAID LVs will be processed. 
14:50:39.360657 lvcreate[140414] device_mapper/libdm-config.c:1085 activation/monitoring not found in config: defaulting to 1 14:50:39.360661 lvcreate[140414] lvmcmdline.c:3241 Processing command: lvcreate --yes -l 61047 -vvvv -n osd-block-3fdec961-415a-434b-ac5b-d344f9916fe9 ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 14:50:39.360667 lvcreate[140414] lvmcmdline.c:3242 Command pid: 140414 14:50:39.360671 lvcreate[140414] lvmcmdline.c:3243 System ID: 14:50:39.360676 lvcreate[140414] lvmcmdline.c:3246 O_DIRECT will be used 14:50:39.360681 lvcreate[140414] device_mapper/libdm-config.c:1013 global/locking_type not found in config: defaulting to 1 14:50:39.360687 lvcreate[140414] device_mapper/libdm-config.c:1085 global/wait_for_locks not found in config: defaulting to 1 14:50:39.360692 lvcreate[140414] locking/locking.c:141 File locking settings: readonly:0 sysinit:0 ignorelockingfailure:0 global/metadata_read_only:0 global/wait_for_locks:1. 14:50:39.360702 lvcreate[140414] device_mapper/libdm-config.c:1085 global/prioritise_write_locks not found in config: defaulting to 1 14:50:39.360707 lvcreate[140414] device_mapper/libdm-config.c:986 global/locking_dir not found in config: defaulting to "/var/lock/lvm" 14:50:39.360716 lvcreate[140414] device_mapper/libdm-config.c:1085 devices/md_component_detection not found in config: defaulting to 1 14:50:39.360721 lvcreate[140414] device_mapper/libdm-config.c:986 devices/md_component_checks not found in config: defaulting to "auto" 14:50:39.360725 lvcreate[140414] lvmcmdline.c:3062 Using md_component_checks auto use_full_md_check 0 14:50:39.360730 lvcreate[140414] device_mapper/libdm-config.c:986 devices/multipath_wwids_file not found in config: defaulting to "/etc/multipath/wwids" 14:50:39.360736 lvcreate[140414] device/dev-mpath.c:217 multipath wwids file not found 14:50:39.360743 lvcreate[140414] device_mapper/libdm-config.c:1085 global/use_lvmlockd not found in config: defaulting to 0 14:50:39.360749 lvcreate[140414] 
activate/activate.c:517 Getting target version for linear 14:50:39.360812 lvcreate[140414] device_mapper/ioctl/libdm-iface.c:2097 dm version [ opencount flush ] [2048] (*1) 14:50:39.360822 lvcreate[140414] device_mapper/ioctl/libdm-iface.c:2097 dm versions [ opencount flush ] [2048] (*1) 14:50:39.360832 lvcreate[140414] activate/activate.c:552 Found linear target v1.4.0. 14:50:39.360836 lvcreate[140414] activate/activate.c:517 Getting target version for striped 14:50:39.360840 lvcreate[140414] device_mapper/ioctl/libdm-iface.c:2097 dm versions [ opencount flush ] [2048] (*1) 14:50:39.360847 lvcreate[140414] activate/activate.c:552 Found striped target v1.6.0. 14:50:39.360852 lvcreate[140414] device_mapper/libdm-config.c:1085 allocation/wipe_signatures_when_zeroing_new_lvs not found in config: defaulting to 1 14:50:39.360861 lvcreate[140414] device_mapper/libdm-config.c:1013 activation/mirror_region_size not found in config: defaulting to 2048 14:50:39.360865 lvcreate[140414] device_mapper/libdm-config.c:1013 activation/raid_region_size not found in config: defaulting to 2048 14:50:39.360873 lvcreate[140414] device_mapper/libdm-config.c:986 report/output_format not found in config: defaulting to "basic" 14:50:39.360879 lvcreate[140414] device_mapper/libdm-config.c:1085 log/report_command_log not found in config: defaulting to 0 14:50:39.360883 lvcreate[140414] toollib.c:2434 Processing each VG 14:50:39.360887 lvcreate[140414] cache/lvmcache.c:1599 lvmcache label scan begin 14:50:39.360891 lvcreate[140414] label/label.c:1263 Finding devices to scan 14:50:39.360928 lvcreate[140414] device_mapper/libdm-config.c:1085 devices/use_devicesfile not found in config: defaulting to 0 14:50:39.360933 lvcreate[140414] device/dev-cache.c:1195 Creating list of system devices. 14:50:39.361043 lvcreate[140414] device/dev-cache.c:753 Found dev 259:0 /dev/block/259:0 - new. 14:50:39.361053 lvcreate[140414] device/dev-cache.c:753 Found dev 259:1 /dev/block/259:1 - new. 
14:50:39.361060 lvcreate[140414] device/dev-cache.c:753 Found dev 7:0 /dev/block/7:0 - new. 14:50:39.361066 lvcreate[140414] device/dev-cache.c:753 Found dev 7:1 /dev/block/7:1 - new. 14:50:39.361072 lvcreate[140414] device/dev-cache.c:753 Found dev 7:2 /dev/block/7:2 - new. 14:50:39.361079 lvcreate[140414] device/dev-cache.c:753 Found dev 7:3 /dev/block/7:3 - new. 14:50:39.361085 lvcreate[140414] device/dev-cache.c:753 Found dev 7:4 /dev/block/7:4 - new. 14:50:39.361091 lvcreate[140414] device/dev-cache.c:753 Found dev 7:5 /dev/block/7:5 - new. 14:50:39.361097 lvcreate[140414] device/dev-cache.c:753 Found dev 7:6 /dev/block/7:6 - new. 14:50:39.361102 lvcreate[140414] device/dev-cache.c:753 Found dev 7:7 /dev/block/7:7 - new. 14:50:39.361109 lvcreate[140414] device/dev-cache.c:753 Found dev 8:0 /dev/block/8:0 - new. 14:50:39.361115 lvcreate[140414] device/dev-cache.c:753 Found dev 8:1 /dev/block/8:1 - new. 14:50:39.361122 lvcreate[140414] device/dev-cache.c:753 Found dev 8:2 /dev/block/8:2 - new. 14:50:39.361128 lvcreate[140414] device/dev-cache.c:753 Found dev 8:3 /dev/block/8:3 - new. 14:50:39.361133 lvcreate[140414] device/dev-cache.c:753 Found dev 8:4 /dev/block/8:4 - new. 14:50:39.361139 lvcreate[140414] device/dev-cache.c:753 Found dev 8:5 /dev/block/8:5 - new. 14:50:39.361144 lvcreate[140414] device/dev-cache.c:753 Found dev 8:6 /dev/block/8:6 - new. 14:50:39.361193 lvcreate[140414] device/dev-cache.c:778 Found dev 259:0 /dev/ceph-disks/2-1 - new alias. 14:50:39.361200 lvcreate[140414] device/dev-cache.c:778 Found dev 259:1 /dev/ceph-disks/2-2 - new alias. 14:50:39.361637 lvcreate[140414] device/dev-cache.c:778 Found dev 259:1 /dev/disk/by-id/lvm-pv-uuid-3Zd6yp-BKf8-IHC2-yiNj-1uUJ-6gIY-Jm2kbm - new alias. 14:50:39.361645 lvcreate[140414] device/dev-cache.c:778 Found dev 259:1 /dev/disk/by-id/nvme-H20_HBRPEKNL0203A_NVMe_INTEL_1024GB_PHPG1246003M1P0B-1 - new alias. 
14:50:39.361653 lvcreate[140414] device/dev-cache.c:778 Found dev 259:0 /dev/disk/by-id/nvme-H20_HBRPEKNL0203A_NVMe_INTEL_1024GB_PHPG1401005Y1P0B-1 - new alias. 14:50:39.361659 lvcreate[140414] device/dev-cache.c:778 Found dev 259:0 /dev/disk/by-id/nvme-eui.5cd2e4732150000e - new alias. 14:50:39.361665 lvcreate[140414] device/dev-cache.c:778 Found dev 259:1 /dev/disk/by-id/nvme-eui.5cd2e47811500761 - new alias. 14:50:39.361672 lvcreate[140414] device/dev-cache.c:778 Found dev 8:0 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0 - new alias. 14:50:39.361678 lvcreate[140414] device/dev-cache.c:778 Found dev 8:1 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part1 - new alias. 14:50:39.361686 lvcreate[140414] device/dev-cache.c:778 Found dev 8:2 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part2 - new alias. 14:50:39.361692 lvcreate[140414] device/dev-cache.c:778 Found dev 8:3 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part3 - new alias. 14:50:39.361698 lvcreate[140414] device/dev-cache.c:778 Found dev 8:4 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part4 - new alias. 14:50:39.361704 lvcreate[140414] device/dev-cache.c:778 Found dev 8:5 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part5 - new alias. 14:50:39.361711 lvcreate[140414] device/dev-cache.c:778 Found dev 8:6 /dev/disk/by-id/usb-Samsung_Flash_Drive_FIT_0346222100000240-0:0-part6 - new alias. 14:50:39.361724 lvcreate[140414] device/dev-cache.c:778 Found dev 8:3 /dev/disk/by-label/BOOT - new alias. 14:50:39.361733 lvcreate[140414] device/dev-cache.c:778 Found dev 8:1 /dev/disk/by-label/EFI - new alias. 14:50:39.361741 lvcreate[140414] device/dev-cache.c:778 Found dev 8:6 /dev/disk/by-label/EPHEMERAL - new alias. 14:50:39.361749 lvcreate[140414] device/dev-cache.c:778 Found dev 8:5 /dev/disk/by-label/STATE - new alias. 
14:50:39.361765 lvcreate[140414] device/dev-cache.c:778 Found dev 8:2 /dev/disk/by-partlabel/BIOS - new alias. 14:50:39.361773 lvcreate[140414] device/dev-cache.c:778 Found dev 8:3 /dev/disk/by-partlabel/BOOT - new alias. 14:50:39.361781 lvcreate[140414] device/dev-cache.c:778 Found dev 8:1 /dev/disk/by-partlabel/EFI - new alias. 14:50:39.361789 lvcreate[140414] device/dev-cache.c:778 Found dev 8:6 /dev/disk/by-partlabel/EPHEMERAL - new alias. 14:50:39.361797 lvcreate[140414] device/dev-cache.c:778 Found dev 8:4 /dev/disk/by-partlabel/META - new alias. 14:50:39.361805 lvcreate[140414] device/dev-cache.c:778 Found dev 8:5 /dev/disk/by-partlabel/STATE - new alias. 14:50:39.361824 lvcreate[140414] device/dev-cache.c:778 Found dev 8:5 /dev/disk/by-partuuid/37e1e641-de57-4a0a-8240-268bda976aab - new alias. 14:50:39.361832 lvcreate[140414] device/dev-cache.c:778 Found dev 8:2 /dev/disk/by-partuuid/af96884d-7668-46a0-83d7-6add9267db4a - new alias. 14:50:39.361840 lvcreate[140414] device/dev-cache.c:778 Found dev 8:6 /dev/disk/by-partuuid/b18b93ad-b4a8-4a18-9fbd-131e1b7c5f8f - new alias. 14:50:39.361849 lvcreate[140414] device/dev-cache.c:778 Found dev 8:3 /dev/disk/by-partuuid/bc37d3a4-5029-45b0-9af1-719c8ac5e1c7 - new alias. 14:50:39.361857 lvcreate[140414] device/dev-cache.c:778 Found dev 8:1 /dev/disk/by-partuuid/c9c7ec40-6550-4389-9592-664116f5a27a - new alias. 14:50:39.361866 lvcreate[140414] device/dev-cache.c:778 Found dev 8:4 /dev/disk/by-partuuid/d16d8c5e-e552-4a79-9adc-e69b64299881 - new alias. 14:50:39.361886 lvcreate[140414] device/dev-cache.c:778 Found dev 8:0 /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0 - new alias. 14:50:39.361894 lvcreate[140414] device/dev-cache.c:778 Found dev 8:1 /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part1 - new alias. 14:50:39.361903 lvcreate[140414] device/dev-cache.c:778 Found dev 8:2 /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part2 - new alias. 
14:50:39.361911 lvcreate[140414] device/dev-cache.c:778 Found dev 8:3 /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part3 - new alias. 14:50:39.361920 lvcreate[140414] device/dev-cache.c:778 Found dev 8:4 /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part4 - new alias. 14:50:39.361927 lvcreate[140414] device/dev-cache.c:778 Found dev 8:5 /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part5 - new alias. 14:50:39.361935 lvcreate[140414] device/dev-cache.c:778 Found dev 8:6 /dev/disk/by-path/pci-0000:00:14.0-usb-0:8:1.0-scsi-0:0:0:0-part6 - new alias. 14:50:39.361950 lvcreate[140414] device/dev-cache.c:778 Found dev 8:1 /dev/disk/by-uuid/6494-1D42 - new alias. 14:50:39.361959 lvcreate[140414] device/dev-cache.c:778 Found dev 8:6 /dev/disk/by-uuid/81cd20b9-1b29-4a73-b058-826052b71a00 - new alias. 14:50:39.361967 lvcreate[140414] device/dev-cache.c:778 Found dev 8:3 /dev/disk/by-uuid/d5a06eb3-df9f-449b-942f-5990f9a5a7a4 - new alias. 14:50:39.361977 lvcreate[140414] device/dev-cache.c:778 Found dev 8:5 /dev/disk/by-uuid/dfed63cf-c7c6-4a13-b193-c16f13fb73df - new alias. 14:50:39.362000 lvcreate[140414] device/dev-cache.c:1160 /dev/fd: Symbolic link to directory 14:50:39.362007 lvcreate[140414] device/dev-cache.c:1165 /dev/hugepages: Different filesystem in directory 14:50:39.362022 lvcreate[140414] device/dev-cache.c:778 Found dev 7:0 /dev/loop0 - new alias. 14:50:39.362028 lvcreate[140414] device/dev-cache.c:778 Found dev 7:1 /dev/loop1 - new alias. 14:50:39.362033 lvcreate[140414] device/dev-cache.c:778 Found dev 7:2 /dev/loop2 - new alias. 14:50:39.362039 lvcreate[140414] device/dev-cache.c:778 Found dev 7:3 /dev/loop3 - new alias. 14:50:39.362044 lvcreate[140414] device/dev-cache.c:778 Found dev 7:4 /dev/loop4 - new alias. 14:50:39.362051 lvcreate[140414] device/dev-cache.c:778 Found dev 7:5 /dev/loop5 - new alias. 14:50:39.362055 lvcreate[140414] device/dev-cache.c:778 Found dev 7:6 /dev/loop6 - new alias. 
14:50:39.362061 lvcreate[140414] device/dev-cache.c:778 Found dev 7:7 /dev/loop7 - new alias. 14:50:39.362077 lvcreate[140414] device/dev-cache.c:778 Found dev 259:0 /dev/nvme0n1 - new alias. 14:50:39.362081 lvcreate[140414] device/dev-cache.c:362 Found nvme device /dev/ceph-disks/2-1 14:50:39.362086 lvcreate[140414] device/dev-cache.c:778 Found dev 259:1 /dev/nvme1n1 - new alias. 14:50:39.362090 lvcreate[140414] device/dev-cache.c:362 Found nvme device /dev/ceph-disks/2-2 14:50:39.362097 lvcreate[140414] device/dev-cache.c:1165 /dev/pts: Different filesystem in directory 14:50:39.362103 lvcreate[140414] device/dev-cache.c:778 Found dev 8:0 /dev/sda - new alias. 14:50:39.362110 lvcreate[140414] device/dev-cache.c:778 Found dev 8:1 /dev/sda1 - new alias. 14:50:39.362116 lvcreate[140414] device/dev-cache.c:778 Found dev 8:2 /dev/sda2 - new alias. 14:50:39.362122 lvcreate[140414] device/dev-cache.c:778 Found dev 8:3 /dev/sda3 - new alias. 14:50:39.362127 lvcreate[140414] device/dev-cache.c:778 Found dev 8:4 /dev/sda4 - new alias. 14:50:39.362133 lvcreate[140414] device/dev-cache.c:778 Found dev 8:5 /dev/sda5 - new alias. 14:50:39.362138 lvcreate[140414] device/dev-cache.c:778 Found dev 8:6 /dev/sda6 - new alias. 
14:50:39.362145 lvcreate[140414] device/dev-cache.c:1165 /dev/shm: Different filesystem in directory 14:50:39.362204 lvcreate[140414] label/label.c:1339 Filtering devices to scan (nodata) 14:50:39.362217 lvcreate[140414] device/dev-io.c:120 /dev/nvme0n1: size is 2000409264 sectors 14:50:39.362224 lvcreate[140414] device/dev-io.c:466 Closed /dev/nvme0n1 14:50:39.362243 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/nvme0n1 14:50:39.362252 lvcreate[140414] device/dev-io.c:120 /dev/loop0: size is 100680 sectors 14:50:39.362258 lvcreate[140414] device/dev-io.c:466 Closed /dev/loop0 14:50:39.362263 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/loop0 14:50:39.362360 lvcreate[140414] device/dev-io.c:120 /dev/sda: size is 501253132 sectors 14:50:39.362369 lvcreate[140414] device/dev-io.c:466 Closed /dev/sda 14:50:39.362385 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/sda 14:50:39.362394 lvcreate[140414] device/dev-io.c:120 /dev/nvme1n1: size is 2000409264 sectors 14:50:39.362399 lvcreate[140414] device/dev-io.c:466 Closed /dev/nvme1n1 14:50:39.362414 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/nvme1n1 14:50:39.362422 lvcreate[140414] device/dev-io.c:120 /dev/loop1: size is 0 sectors 14:50:39.362427 lvcreate[140414] device/dev-io.c:466 Closed /dev/loop1 14:50:39.362432 lvcreate[140414] filters/filter-usable.c:39 /dev/loop1: Skipping: Too small to hold a PV 14:50:39.362525 lvcreate[140414] device/dev-io.c:120 /dev/sda1: size is 204800 sectors 14:50:39.362531 lvcreate[140414] device/dev-io.c:466 Closed /dev/sda1 14:50:39.362543 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/sda1 14:50:39.362552 lvcreate[140414] device/dev-io.c:120 /dev/loop2: size is 0 sectors 14:50:39.362558 lvcreate[140414] device/dev-io.c:466 Closed /dev/loop2 14:50:39.362562 lvcreate[140414] filters/filter-usable.c:39 /dev/loop2: Skipping: Too small to hold a PV 
14:50:39.362687 lvcreate[140414] device/dev-io.c:120 /dev/sda2: size is 2048 sectors 14:50:39.362700 lvcreate[140414] device/dev-io.c:466 Closed /dev/sda2 14:50:39.362706 lvcreate[140414] filters/filter-usable.c:39 /dev/sda2: Skipping: Too small to hold a PV 14:50:39.362746 lvcreate[140414] device/dev-io.c:120 /dev/loop3: size is 0 sectors 14:50:39.362757 lvcreate[140414] device/dev-io.c:466 Closed /dev/loop3 14:50:39.362762 lvcreate[140414] filters/filter-usable.c:39 /dev/loop3: Skipping: Too small to hold a PV 14:50:39.362872 lvcreate[140414] device/dev-io.c:120 /dev/sda3: size is 2048000 sectors 14:50:39.362881 lvcreate[140414] device/dev-io.c:466 Closed /dev/sda3 14:50:39.362900 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/sda3 14:50:39.362909 lvcreate[140414] device/dev-io.c:120 /dev/loop4: size is 0 sectors 14:50:39.362915 lvcreate[140414] device/dev-io.c:466 Closed /dev/loop4 14:50:39.362919 lvcreate[140414] filters/filter-usable.c:39 /dev/loop4: Skipping: Too small to hold a PV 14:50:39.363030 lvcreate[140414] device/dev-io.c:120 /dev/sda4: size is 2048 sectors 14:50:39.363037 lvcreate[140414] device/dev-io.c:466 Closed /dev/sda4 14:50:39.363042 lvcreate[140414] filters/filter-usable.c:39 /dev/sda4: Skipping: Too small to hold a PV 14:50:39.363051 lvcreate[140414] device/dev-io.c:120 /dev/loop5: size is 0 sectors 14:50:39.363056 lvcreate[140414] device/dev-io.c:466 Closed /dev/loop5 14:50:39.363060 lvcreate[140414] filters/filter-usable.c:39 /dev/loop5: Skipping: Too small to hold a PV 14:50:39.363235 lvcreate[140414] device/dev-io.c:120 /dev/sda5: size is 204800 sectors 14:50:39.363242 lvcreate[140414] device/dev-io.c:466 Closed /dev/sda5 14:50:39.363254 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/sda5 14:50:39.363263 lvcreate[140414] device/dev-io.c:120 /dev/loop6: size is 0 sectors 14:50:39.363268 lvcreate[140414] device/dev-io.c:466 Closed /dev/loop6 14:50:39.363273 lvcreate[140414] 
filters/filter-usable.c:39 /dev/loop6: Skipping: Too small to hold a PV 14:50:39.363539 lvcreate[140414] device/dev-io.c:120 /dev/sda6: size is 498788352 sectors 14:50:39.363547 lvcreate[140414] device/dev-io.c:466 Closed /dev/sda6 14:50:39.363558 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/sda6 14:50:39.363567 lvcreate[140414] device/dev-io.c:120 /dev/loop7: size is 0 sectors 14:50:39.363573 lvcreate[140414] device/dev-io.c:466 Closed /dev/loop7 14:50:39.363578 lvcreate[140414] filters/filter-usable.c:39 /dev/loop7: Skipping: Too small to hold a PV 14:50:39.363583 lvcreate[140414] label/label.c:1356 Filtering devices to scan done (nodata) 14:50:39.363605 lvcreate[140414] label/hints.c:1374 get_hints: no file 14:50:39.363613 lvcreate[140414] label/hints.c:258 touch_hints errno 2 /var/run/lvm/hints 14:50:39.363619 lvcreate[140414] label/label.c:836 Checking fd limit for num_devs 8 want 40 soft 1048576 hard 1048576 14:50:39.363624 lvcreate[140414] label/label.c:641 Scanning 8 devices for VG info 14:50:39.363631 lvcreate[140414] label/label.c:569 open /dev/nvme0n1 ro di 0 fd 4 14:50:39.363669 lvcreate[140414] label/label.c:569 open /dev/loop0 ro di 1 fd 5 14:50:39.363766 lvcreate[140414] label/label.c:569 open /dev/sda ro di 2 fd 6 14:50:39.363797 lvcreate[140414] label/label.c:569 open /dev/nvme1n1 ro di 3 fd 7 14:50:39.365531 lvcreate[140414] label/label.c:569 open /dev/sda1 ro di 4 fd 8 14:50:39.367325 lvcreate[140414] label/label.c:569 open /dev/sda3 ro di 5 fd 9 14:50:39.367418 lvcreate[140414] label/label.c:569 open /dev/sda5 ro di 6 fd 10 14:50:39.367477 lvcreate[140414] label/label.c:569 open /dev/sda6 ro di 7 fd 11 14:50:39.369132 lvcreate[140414] label/label.c:680 Scanning submitted 8 reads 14:50:39.369166 lvcreate[140414] label/label.c:714 Processing data from device /dev/nvme0n1 259:0 di 0 14:50:39.369191 lvcreate[140414] device/dev-io.c:96 /dev/nvme0n1: using cached size 2000409264 sectors 14:50:39.369251 lvcreate[140414] 
device/dev-io.c:96 /dev/nvme0n1: using cached size 2000409264 sectors 14:50:39.369269 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/nvme0n1 14:50:39.369289 lvcreate[140414] label/label.c:399 /dev/nvme0n1: No lvm label detected 14:50:39.374315 lvcreate[140414] label/label.c:714 Processing data from device /dev/loop0 7:0 di 1 14:50:39.374350 lvcreate[140414] device/dev-io.c:96 /dev/loop0: using cached size 100680 sectors 14:50:39.374416 lvcreate[140414] device/dev-io.c:96 /dev/loop0: using cached size 100680 sectors 14:50:39.374439 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/loop0 14:50:39.374463 lvcreate[140414] label/label.c:399 /dev/loop0: No lvm label detected 14:50:39.374487 lvcreate[140414] label/label.c:714 Processing data from device /dev/sda 8:0 di 2 14:50:39.374518 lvcreate[140414] device/dev-io.c:96 /dev/sda: using cached size 501253132 sectors 14:50:39.374573 lvcreate[140414] filters/filter-partitioned.c:35 /dev/sda: Skipping: Partition table signature found 14:50:39.374596 lvcreate[140414] label/label.c:382 14:50:39.374631 lvcreate[140414] label/label.c:714 Processing data from device /dev/nvme1n1 259:1 di 3 14:50:39.374668 lvcreate[140414] device/dev-io.c:96 /dev/nvme1n1: using cached size 2000409264 sectors 14:50:39.374735 lvcreate[140414] device/dev-io.c:96 /dev/nvme1n1: using cached size 2000409264 sectors 14:50:39.374758 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/nvme1n1 14:50:39.374786 lvcreate[140414] label/label.c:312 Found label at sector 1 on /dev/nvme1n1 14:50:39.374812 lvcreate[140414] cache/lvmcache.c:2477 Found PVID 3Zd6ypBKf8IHC2yiNj1uUJ6gIYJm2kbm on /dev/nvme1n1 14:50:39.374840 lvcreate[140414] cache/lvmcache.c:2000 lvmcache /dev/nvme1n1: now in VG #orphans_lvm2 #orphans_lvm2 14:50:39.374867 lvcreate[140414] format_text/text_label.c:538 Scanning /dev/nvme1n1 mda1 summary. 
14:50:39.374891 lvcreate[140414] format_text/format-text.c:196 Reading mda header sector from /dev/nvme1n1 at 4096 14:50:39.374948 lvcreate[140414] format_text/import.c:57 Reading metadata summary from /dev/nvme1n1 at 4608 size 830 (+0) 14:50:39.375060 lvcreate[140414] format_text/format-text.c:1575 Found metadata summary on /dev/nvme1n1 at 4608 size 830 for VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 14:50:39.375088 lvcreate[140414] cache/lvmcache.c:1919 lvmcache adding vginfo for ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 gtzkxx-GwxA-CexX-8Jgg-LCcx-xHsg-SBEuJ2 14:50:39.375116 lvcreate[140414] cache/lvmcache.c:2000 lvmcache /dev/nvme1n1: now in VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 gtzkxxGwxACexX8JggLCcxxHsgSBEuJ2 14:50:39.375140 lvcreate[140414] cache/lvmcache.c:1840 lvmcache /dev/nvme1n1: VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900: set VGID to gtzkxxGwxACexX8JggLCcxxHsgSBEuJ2. 14:50:39.375170 lvcreate[140414] cache/lvmcache.c:2196 lvmcache /dev/nvme1n1 mda1 VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 set seqno 1 checksum 5381693a mda_size 830 14:50:39.375201 lvcreate[140414] cache/lvmcache.c:2034 lvmcache /dev/nvme1n1: VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900: set creation host to nxl03. 
14:50:39.375226 lvcreate[140414] format_text/text_label.c:566 Found metadata seqno 1 in mda1 on /dev/nvme1n1 14:50:39.375250 lvcreate[140414] label/label.c:714 Processing data from device /dev/sda1 8:1 di 4 14:50:39.375281 lvcreate[140414] device/dev-io.c:96 /dev/sda1: using cached size 204800 sectors 14:50:39.375337 lvcreate[140414] device/dev-io.c:96 /dev/sda1: using cached size 204800 sectors 14:50:39.375359 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/sda1 14:50:39.375383 lvcreate[140414] label/label.c:399 /dev/sda1: No lvm label detected 14:50:39.375434 lvcreate[140414] label/label.c:714 Processing data from device /dev/sda3 8:3 di 5 14:50:39.375465 lvcreate[140414] device/dev-io.c:96 /dev/sda3: using cached size 2048000 sectors 14:50:39.375533 lvcreate[140414] device/dev-io.c:96 /dev/sda3: using cached size 2048000 sectors 14:50:39.375557 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/sda3 14:50:39.375596 lvcreate[140414] label/label.c:399 /dev/sda3: No lvm label detected 14:50:39.375633 lvcreate[140414] label/label.c:714 Processing data from device /dev/sda5 8:5 di 6 14:50:39.375663 lvcreate[140414] device/dev-io.c:96 /dev/sda5: using cached size 204800 sectors 14:50:39.375732 lvcreate[140414] device/dev-io.c:96 /dev/sda5: using cached size 204800 sectors 14:50:39.375753 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/sda5 14:50:39.375780 lvcreate[140414] label/label.c:399 /dev/sda5: No lvm label detected 14:50:39.375815 lvcreate[140414] label/label.c:714 Processing data from device /dev/sda6 8:6 di 7 14:50:39.375845 lvcreate[140414] device/dev-io.c:96 /dev/sda6: using cached size 498788352 sectors 14:50:39.375896 lvcreate[140414] device/dev-io.c:96 /dev/sda6: using cached size 498788352 sectors 14:50:39.375931 lvcreate[140414] filters/filter-persistent.c:131 filter caching good /dev/sda6 14:50:39.375953 lvcreate[140414] label/label.c:399 /dev/sda6: No lvm label detected 
14:50:39.375986 lvcreate[140414] label/label.c:749 Scanned devices: read errors 0 process errors 0 failed 0 14:50:39.376016 lvcreate[140414] cache/lvmcache.c:1689 lvmcache label scan done 14:50:39.376036 lvcreate[140414] toollib.c:2488 Obtaining the complete list of VGs to process 14:50:39.376058 lvcreate[140414] toollib.c:2197 Processing VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 gtzkxx-GwxA-CexX-8Jgg-LCcx-xHsg-SBEuJ2 14:50:39.376099 lvcreate[140414] misc/lvm-flock.c:229 Locking /var/lock/lvm/V_ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 WB 14:50:39.376124 lvcreate[140414] misc/lvm-flock.c:113 _do_flock /var/lock/lvm/V_ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900:aux WB 14:50:39.376244 lvcreate[140414] misc/lvm-flock.c:113 _do_flock /var/lock/lvm/V_ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 WB 14:50:39.376310 lvcreate[140414] misc/lvm-flock.c:47 _undo_flock /var/lock/lvm/V_ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900:aux 14:50:39.376369 lvcreate[140414] metadata/metadata.c:4645 Reading VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 gtzkxxGwxACexX8JggLCcxxHsgSBEuJ2 14:50:39.376406 lvcreate[140414] label/label.c:1898 reopen writable /dev/nvme1n1 di 3 prev 7 fd 4 14:50:39.376434 lvcreate[140414] format_text/format-text.c:196 Reading mda header sector from /dev/nvme1n1 at 4096 14:50:39.376774 lvcreate[140414] metadata/metadata.c:4616 Rescan skipped - unchanged offset 512 checksum 5381693a. 14:50:39.376821 lvcreate[140414] metadata/metadata.c:4795 Reading VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 metadata from /dev/nvme1n1 4096 14:50:39.376847 lvcreate[140414] format_text/format-text.c:196 Reading mda header sector from /dev/nvme1n1 at 4096 14:50:39.376895 lvcreate[140414] format_text/import.c:153 Reading metadata from /dev/nvme1n1 at 4608 size 830 (+0) 14:50:39.376961 lvcreate[140414] metadata/vg.c:65 Allocated VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 at 0x7f1d6cff4c70. 
14:50:39.377086 lvcreate[140414] format_text/format-text.c:442 Found metadata text at 4608 off 512 size 830 VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 on /dev/nvme1n1 14:50:39.377120 lvcreate[140414] cache/lvmcache.c:2340 lvmcache_update_vg ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 for info /dev/nvme1n1 14:50:39.377154 lvcreate[140414] device_mapper/libdm-config.c:1013 metadata/lvs_history_retention_time not found in config: defaulting to 0 14:50:39.377180 lvcreate[140414] metadata/pv_manip.c:413 /dev/nvme1n1 0: 0 244190: NULL(0:0) 14:50:39.377210 lvcreate[140414] device/dev-io.c:96 /dev/nvme1n1: using cached size 2000409264 sectors 14:50:39.377237 lvcreate[140414] metadata/vg.c:65 Allocated VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 at 0x7f1d6cff95a0. 14:50:39.377330 lvcreate[140414] toollib.c:2227 Running command for VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 gtzkxx-GwxA-CexX-8Jgg-LCcx-xHsg-SBEuJ2 14:50:39.377371 lvcreate[140414] metadata/lv_manip.c:7256 Creating logical volume osd-block-3fdec961-415a-434b-ac5b-d344f9916fe9 14:50:39.377421 lvcreate[140414] metadata/lv_manip.c:4483 Adding segment of type striped to LV osd-block-3fdec961-415a-434b-ac5b-d344f9916fe9. 14:50:39.377466 lvcreate[140414] device_mapper/libdm-config.c:1085 allocation/mirror_logs_require_separate_pvs not found in config: defaulting to 0 14:50:39.377490 lvcreate[140414] metadata/lv_manip.c:3742 Adjusted allocation request to 61047 logical extents. Existing size 0. New size 61047. 14:50:39.377531 lvcreate[140414] device_mapper/libdm-config.c:1085 allocation/maximise_cling not found in config: defaulting to 1 14:50:39.377557 lvcreate[140414] metadata/pv_map.c:53 Allowing allocation on /dev/nvme1n1 start PE 0 length 244190 14:50:39.377582 lvcreate[140414] metadata/lv_manip.c:3448 Trying allocation using contiguous policy. 14:50:39.377607 lvcreate[140414] metadata/lv_manip.c:3047 Areas to be sorted and filled sequentially. 
14:50:39.377628 lvcreate[140414] metadata/lv_manip.c:2959 Still need 61047 total extents from 244190 remaining (0 positional slots): 14:50:39.377673 lvcreate[140414] metadata/lv_manip.c:2963 1 (1 data/0 parity) parallel areas of 61047 extents each 14:50:39.377695 lvcreate[140414] metadata/lv_manip.c:2966 0 mirror logs of 0 extents each 14:50:39.377720 lvcreate[140414] metadata/lv_manip.c:2611 Considering allocation area 0 as /dev/nvme1n1 start PE 0 length 61047 leaving 183143. 14:50:39.377753 lvcreate[140414] metadata/lv_manip.c:2193 Allocating parallel area 0 on /dev/nvme1n1 start PE 0 length 61047. 14:50:39.377795 lvcreate[140414] mm/memlock.c:608 Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0 14:50:39.377825 lvcreate[140414] metadata/pv_manip.c:413 /dev/nvme1n1 0: 0 61047: osd-block-3fdec961-415a-434b-ac5b-d344f9916fe9(0:0) 14:50:39.377856 lvcreate[140414] metadata/pv_manip.c:413 /dev/nvme1n1 1: 61047 183143: NULL(0:0) 14:50:39.377908 lvcreate[140414] device_mapper/libdm-file.c:46 Creating directory "/etc/lvm/archive" 14:50:39.377949 lvcreate[140414] device_mapper/libdm-file.c:102 14:50:39.377970 lvcreate[140414] format_text/archiver.c:124 14:50:39.377997 lvcreate[140414] metadata/metadata.c:2980 14:50:39.378019 lvcreate[140414] metadata/lv_manip.c:9464 14:50:39.378037 lvcreate[140414] metadata/lv_manip.c:9850 14:50:39.378055 lvcreate[140414] lvcreate.c:1810 14:50:39.378077 lvcreate[140414] toollib.c:2232 14:50:39.378099 lvcreate[140414] mm/memlock.c:608 Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0 14:50:39.378124 lvcreate[140414] activate/fs.c:493 Syncing device names 14:50:39.378150 lvcreate[140414] misc/lvm-flock.c:84 Unlocking /var/lock/lvm/V_ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 14:50:39.378172 lvcreate[140414] misc/lvm-flock.c:47 _undo_flock /var/lock/lvm/V_ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 14:50:39.378226 lvcreate[140414] metadata/vg.c:80 Freeing VG 
ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 at 0x7f1d6cff95a0.
14:50:39.378254 lvcreate[140414] metadata/vg.c:80 Freeing VG ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900 at 0x7f1d6cff4c70.
14:50:39.378319 lvcreate[140414] device_mapper/libdm-config.c:1085 global/notify_dbus not found in config: defaulting to 1
14:50:39.378344 lvcreate[140414] cache/lvmcache.c:2603 Destroy lvmcache content
14:50:39.440690 lvcreate[140414] lvmcmdline.c:3352 Completed: lvcreate --yes -l 61047 -vvvv -n osd-block-3fdec961-415a-434b-ac5b-d344f9916fe9 ceph-a6df2450-88a1-4fbb-be8f-9c91664c2900
14:50:39.440743 lvcreate[140414] lvmcmdline.c:3855 Internal error: Failed command did not use log_error
14:50:39.440766 lvcreate[140414] lvmcmdline.c:3856 Command failed with status code 5.
14:50:39.441075 lvcreate[140414] cache/lvmcache.c:2603 Destroy lvmcache content
14:50:39.441164 lvcreate[140414] metadata/vg.c:80 Freeing VG #orphans_lvm2 at 0x7f1d6d8241a0.
14:50:39.441508 lvcreate[140414] activate/fs.c:493 Syncing device names
~~~

The important piece of these logs is where lvcreate tries to create a directory on the root filesystem and then unwinds:

```
14:50:39.377908 lvcreate[140414] device_mapper/libdm-file.c:46  Creating directory "/etc/lvm/archive"
14:50:39.377949 lvcreate[140414] device_mapper/libdm-file.c:102  <backtrace>
```

This is where the command fails, because the root filesystem is read-only. At this point I think we know what the problem is, but I don't know what the right solution is for Talos.

```
mkdir /rootfs/etc/lvm/archive
mkdir: cannot create directory '/rootfs/etc/lvm/archive': Read-only file system
```
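For anyone wanting to confirm this on their own nodes: the mount options are the 4th column of `/proc/mounts`, and `ro` among them is exactly what makes LVM's mkdir fail. A minimal sketch of the check, run here against an illustrative sample line (not taken from a real node) so it works anywhere:

```shell
# Sample /proc/mounts line for "/"; on a node you would read the real one,
# e.g. with: grep ' / ' /proc/mounts
line='none / squashfs ro,relatime 0 0'

# Extract the options column and look for "ro" among the comma-separated flags.
opts=$(echo "$line" | awk '{print $4}')
case ",$opts," in
  *,ro,*) echo "root is read-only" ;;
  *)      echo "root is writable" ;;
esac
```

For the sample line above this prints `root is read-only`, matching the mkdir failure.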

Edit:

After a bit more reading on LVM, it seems the /etc/lvm/lvm.conf file in the base image should be modified in one of two ways.

Disable backup and archive:

```
backup {
        # Configuration option backup/backup.
        # Maintain a backup of the current metadata configuration.
        # Think very hard before turning this off!
        backup = 0

        # Configuration option backup/archive.
        # Maintain an archive of old metadata configurations.
        # Think very hard before turning this off.
        archive = 0
}
```

Or move those files to a persistent location:

```
backup {
        # Configuration option backup/backup_dir.
        # Location of the metadata backup files.
        # Remember to back up this directory regularly!
        backup_dir = "/var/lib/lvm/backup"

        # Configuration option backup/archive_dir.
        # Location of the metadata archive files.
        # Remember to back up this directory regularly!
        archive_dir = "/var/lib/lvm/archive"
}
```
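Either way, it is easy to sanity-check which strategy a given lvm.conf applies by grepping its `backup` section. A throwaway sketch against a generated sample file (scratch path, not a system path); with the lvm2 tools available, `lvmconfig backup/backup backup/archive` would report the effective values instead:

```shell
# Write a sample lvm.conf with backups disabled, then verify both settings
# are present and set to 0.
cat > sample-lvm.conf <<'EOF'
backup {
        backup = 0
        archive = 0
}
EOF
grep -cE '^[[:space:]]*(backup|archive)[[:space:]]*=[[:space:]]*0' sample-lvm.conf
```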
cehoffman commented 1 year ago

For those coming across this: for now I have worked around the LVM issue by adding a file overwrite for /etc/lvm/lvm.conf to the machine config. As I understand it, the stock configuration file is essentially documentation, with every default of the toolset either set or commented out, so the machineconfig patch below should be all you need. So far, though, I have only tried overwriting the full configuration file with the two settings changed.

```yaml
machine:
  files:
    - op: overwrite
      path: /etc/lvm/lvm.conf
      permissions: 0o644
      content: |
        backup {
                backup = 0
                archive = 0
        }
```
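If it helps, the patch can be kept in a file and applied per node; a sketch assuming talosctl is already configured for the cluster (the node address is a placeholder):

```shell
# Save the machine-config patch to a file.
cat > lvm-conf-patch.yaml <<'EOF'
machine:
  files:
    - op: overwrite
      path: /etc/lvm/lvm.conf
      permissions: 0o644
      content: |
        backup {
                backup = 0
                archive = 0
        }
EOF

# Apply it without hand-editing the full machine config
# (placeholder node IP; requires a working talosconfig):
# talosctl --nodes 10.0.0.2 patch machineconfig --patch @lvm-conf-patch.yaml
```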
smira commented 1 year ago

Thanks for your analysis. With the Talos rootfs being read-only and /var being ephemeral, I would say turning off backups sounds like the saner solution to me.