rook / rook

Storage Orchestration for Kubernetes
https://rook.io
Apache License 2.0

ceph-volume crashes and osd fails to initialize when creating a disk osd on a NixOS node #14120

Open JustinLex opened 1 week ago

JustinLex commented 1 week ago

Is this a bug report or feature request?

Deviation from expected behavior: ceph-volume crashes with an IndexError during the OSD initialization job when trying to create a disk OSD with an LVM db device.

cephosd: configuring new LVM device sda
cephosd: "/dev/h5b_metadata0/h5b_metadata0_2" found in the desired devices (matched by link: "/dev/h5b_metadata0/h5b_metadata0_2")
cephosd: using /dev/h5b_metadata0/h5b_metadata0_2 as metadataDevice for device /dev/sda and let ceph-volume lvm batch decide how to create volumes
cephosd: configuring new LVM device sdb
cephosd: "/dev/h5b_metadata0/h5b_metadata0_0" found in the desired devices (matched by link: "/dev/h5b_metadata0/h5b_metadata0_0")
cephosd: using /dev/h5b_metadata0/h5b_metadata0_0 as metadataDevice for device /dev/sdb and let ceph-volume lvm batch decide how to create volumes
cephosd: configuring new LVM device sdc
cephosd: "/dev/h5b_metadata0/h5b_metadata0_1" found in the desired devices (matched by link: "/dev/h5b_metadata0/h5b_metadata0_1")
cephosd: using /dev/h5b_metadata0/h5b_metadata0_1 as metadataDevice for device /dev/sdc and let ceph-volume lvm batch decide how to create volumes
cephosd: configuring new LVM device sde
cephosd: "/dev/h5b_metadata0/h5b_metadata0_3" found in the desired devices (matched by link: "/dev/h5b_metadata0/h5b_metadata0_3")
cephosd: using /dev/h5b_metadata0/h5b_metadata0_3 as metadataDevice for device /dev/sde and let ceph-volume lvm batch decide how to create volumes
exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/sde --db-devices /dev/h5b_metadata0/h5b_metadata0_3 --crush-device-class hdd --report
exec: --> passed data devices: 1 physical, 0 LVM
exec: --> relative data size: 1.0
exec: --> passed block_db devices: 0 physical, 1 LVM
exec: Traceback (most recent call last):
exec:   File "/usr/sbin/ceph-volume", line 11, in <module>
exec:     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in __init__
exec:     self.main(self.argv)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
exec:     return f(*a, **kw)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main
exec:     terminal.dispatch(self.mapper, subcommand_args)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
exec:     instance.main()
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
exec:     terminal.dispatch(self.mapper, self.argv)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
exec:     instance.main()
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
exec:     return func(*a, **kw)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 401, in main
exec:     plan = self.get_plan(self.args)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 438, in get_plan
exec:     args.wal_devices)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 469, in get_deployment_layout
exec:     fast_type)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 506, in fast_allocations
exec:     ret.extend(get_lvm_fast_allocs(lvm_devs))
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 172, in get_lvm_fast_allocs
exec:     disk.Size(b=int(d.lvs[0].lv_size)), 1) for d in lvs if not
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 173, in <listcomp>
exec:     d.journal_used_by_ceph]
exec: IndexError: list index out of range
rookcmd: failed to configure devices: failed to initialize osd: failed ceph-volume report: exit status 1
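The traceback dies in `get_lvm_fast_allocs`, which indexes `d.lvs[0]` for each passed db device. An IndexError there means the device object ceph-volume built for the LV path ended up with an empty `lvs` list, i.e. the path matched but no LV could be associated with it. A simplified stand-in for that failure mode (the `LV`/`Device` classes here are minimal mock-ups for illustration, not ceph-volume's real classes):

```python
# Simplified model of the failing pattern in ceph_volume/devices/lvm/batch.py
# (get_lvm_fast_allocs). LV and Device are mock-ups that only carry the
# attributes the traceback touches.

class LV:
    def __init__(self, lv_size):
        self.lv_size = lv_size  # size in bytes, as lvm reports it


class Device:
    def __init__(self, lvs, journal_used_by_ceph=False):
        self.lvs = lvs  # LVs ceph-volume associated with this path
        self.journal_used_by_ceph = journal_used_by_ceph


def get_lvm_fast_allocs(devices):
    # Mirrors the upstream list comprehension: d.lvs[0] is indexed
    # unconditionally, so a device whose LV list came back empty
    # raises IndexError before any filtering can help.
    return [(d, int(d.lvs[0].lv_size), 1)
            for d in devices if not d.journal_used_by_ceph]


resolved = Device(lvs=[LV(lv_size=161061273600)])  # 150G metadata LV found
unresolved = Device(lvs=[])  # path matched, but no LV was resolved

print(get_lvm_fast_allocs([resolved]))  # works: one allocation tuple
try:
    get_lvm_fast_allocs([unresolved])
except IndexError as e:
    print("IndexError:", e)  # same failure mode as the log above
```

So the crash itself is a symptom: whatever enumeration step populates `lvs` came up empty for the `--db-devices` path, and the batch code has no guard for that.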

My data devices are raw HDDs with no GPT/MBR partition table, and I'm using a single NVMe metadata device, partitioned with GPT and carrying a single LVM PV. The metadata VG has one LV for each OSD I'm trying to add, for a total of 4 LVs.

I am partitioning the metadata device like this so that I can add/remove OSDs associated with the metadata device in the future without destroying the whole pool.

I referenced the devices manually in the CephCluster CR like so:

    nodes:
      - name: latios
        devices:
          - name: "/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF" # WD Red 3TB
            config:
              metadataDevice: "/dev/h5b_metadata0/h5b_metadata0_3"
              encryptedDevice: "true"

I tried referencing the LV both as /dev/h5b_metadata0/h5b_metadata0_3 and as /dev/mapper/h5b_metadata0-h5b_metadata0_3, but neither worked.
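Both spellings should name the same logical volume: LVM exposes each LV as `/dev/<vg>/<lv>` and as a device-mapper node `/dev/mapper/<vg>-<lv>`, where any hyphens inside the VG or LV name are doubled in the mapper name. A small sketch of that naming rule (the `dm_name` helper is mine, for illustration):

```python
def dm_name(vg: str, lv: str) -> str:
    """Build the /dev/mapper node name for an LVM LV.

    Device-mapper escapes "-" inside the VG and LV names to "--",
    then joins the two with a single "-".
    """
    return vg.replace("-", "--") + "-" + lv.replace("-", "--")


# The names in this report contain no hyphens, so the mapping is direct:
print("/dev/mapper/" + dm_name("h5b_metadata0", "h5b_metadata0_0"))
# -> /dev/mapper/h5b_metadata0-h5b_metadata0_0, matching the lsblk output below
```

Since the mapper path in lsblk matches what this rule predicts, the LV nodes themselves look fine; the question is why ceph-volume's enumeration doesn't associate them with the given path.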

I have included my full CephCluster and provision logs below. From the error message, it seems like ceph-volume can't find the LVs associated with a device?

How to reproduce it (minimal and precise):

Right now I'm just testing things before converting to a prod environment, so this is a new cluster with a single storage node. Everything deploys, but the storage node fails to provision the OSDs.

File(s) to submit:

CephCluster CR:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # The container image used to launch the Ceph daemon pods (mon, mgr, osd, mds, rgw).
    image: quay.io/ceph/ceph:v18.2.2
    # Whether to allow unsupported versions of Ceph. Currently `quincy` and `reef` are supported.
    # Future versions such as `squid` (v19) would require this to be set to `true`.
    # Do not set to true in production.
    allowUnsupported: false
  # The path on the host where configuration files will be persisted. Must be specified.
  # Important: if you reinstall the cluster, make sure you delete this directory from each host or else the mons will fail to start on the new cluster.
  # In Minikube, the '/data' directory is configured to persist across reboots. Use "/data/rook" in Minikube environment.
  dataDirHostPath: /var/lib/rook
  # Whether or not upgrade should continue even if a check fails
  # This means Ceph's status could be degraded and we don't recommend upgrading but you might decide otherwise
  # Use at your OWN risk
  # To understand Rook's upgrade process of Ceph, read https://rook.io/docs/rook/latest/ceph-upgrade.html#ceph-version-upgrades
  skipUpgradeChecks: false
  # Whether or not continue if PGs are not clean during an upgrade
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  # WaitTimeoutForHealthyOSDInMinutes defines the time (in minutes) the operator would wait before an OSD can be stopped for upgrade or restart.
  # If the timeout exceeds and OSD is not ok to stop, then the operator would skip upgrade for the current OSD and proceed with the next one
  # if `continueUpgradeAfterChecksEvenIfNotHealthy` is `false`. If `continueUpgradeAfterChecksEvenIfNotHealthy` is `true`, then operator would
  # continue with the upgrade of an OSD even if its not ok to stop after the timeout. This timeout won't be applied if `skipUpgradeChecks` is `true`.
  # The default wait timeout is 10 minutes.
  waitTimeoutForHealthyOSDInMinutes: 10
  # Whether or not requires PGs are clean before an OSD upgrade. If set to `true` OSD upgrade process won't start until PGs are healthy.
  # This configuration will be ignored if `skipUpgradeChecks` is `true`.
  # Default is false.
  upgradeOSDRequiresHealthyPGs: false
  mon:
    count: 3
    allowMultiplePerNode: true # FIXME Temporarily enabled until latias is running
  mgr:
    # When higher availability of the mgr is needed, increase the count to 2.
    # In that case, one mgr will be active and one in standby. When Ceph updates which
    # mgr is active, Rook will update the mgr services to match the active mgr.
    count: 2
    allowMultiplePerNode: false
    modules:
      # List of modules to optionally enable or disable.
      # Note the "dashboard" and "monitoring" modules are already configured by other settings in the cluster CR.
      - name: rook
        enabled: true
  # enable the ceph dashboard for viewing cluster status
  dashboard:
    enabled: true
    port: 8080
    ssl: false
  monitoring:
    # Whether to enable the prometheus service monitor
    enabled: true
    metricsDisabled: false
  network:
    connections:
      # Whether to encrypt the data in transit across the wire to prevent eavesdropping the data on the network.
      # The default is false. When encryption is enabled, all communication between clients and Ceph daemons, or between Ceph daemons will be encrypted.
      # When encryption is not enabled, clients still establish a strong initial authentication and data integrity is still validated with a crc check.
      # IMPORTANT: Encryption requires the 5.11 kernel for the latest nbd and cephfs drivers. Alternatively for testing only,
      # you can set the "mounter: rbd-nbd" in the rbd storage class, or "mounter: fuse" in the cephfs storage class.
      # The nbd and fuse drivers are *not* recommended in production since restarting the csi driver pod will disconnect the volumes.
      encryption:
        enabled: true
      # Whether to compress the data in transit across the wire. The default is false.
      # See the kernel requirements above for encryption.
      compression:
        enabled: true
      # Whether to require communication over msgr2. If true, the msgr v1 port (6789) will be disabled
      # and clients will be required to connect to the Ceph cluster with the v2 port (3300).
      # Requires a kernel that supports msgr v2 (kernel 5.11 or CentOS 8.4 or newer).
      requireMsgr2: true
    # enable host networking
    #provider: host
    ipFamily: IPv6
  # enable the crash collector for ceph daemon crash collection
  crashCollector:
    disable: false
    # Uncomment daysToRetain to prune ceph crash entries older than the
    # specified number of days.
    #daysToRetain: 30
  # enable log collector, daemons will log on files and rotate
  logCollector:
    enabled: true
    periodicity: daily # one of: hourly, daily, weekly, monthly
    maxLogSize: 5G # SUFFIX may be 'M' or 'G'. Must be at least 1M.
  # automate [data cleanup process](https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/ceph-teardown.md#delete-the-data-on-hosts) in cluster destruction.
  cleanupPolicy:
    # Since cluster cleanup is destructive to data, confirmation is required.
    # To destroy all Rook data on hosts during uninstall, confirmation must be set to "yes-really-destroy-data".
    # This value should only be set when the cluster is about to be deleted. After the confirmation is set,
    # Rook will immediately stop configuring the cluster and only wait for the delete command.
    # If the empty string is set, Rook will not destroy any data on hosts during uninstall.
    confirmation: ""
    # sanitizeDisks represents settings for sanitizing OSD disks on cluster deletion
    sanitizeDisks:
      # method indicates if the entire disk should be sanitized or simply ceph's metadata
      # in both case, re-install is possible
      # possible choices are 'complete' or 'quick' (default)
      method: quick
      # dataSource indicate where to get random bytes from to write on the disk
      # possible choices are 'zero' (default) or 'random'
      # using random sources will consume entropy from the system and will take much more time then the zero source
      dataSource: zero
      # iteration overwrite N times instead of the default (1)
      # takes an integer value
      iteration: 1
    # allowUninstallWithVolumes defines how the uninstall should be performed
    # If set to true, cephCluster deletion does not wait for the PVs to be deleted.
    allowUninstallWithVolumes: false
  # To control where various services will be scheduled by kubernetes, use the placement configuration sections below.
  # The example under 'all' would have all services scheduled on kubernetes nodes labeled with 'role=storage-node' and
  # tolerate taints with a key of 'storage-node'.
  # placement:
  #   all:
  #     nodeAffinity:
  #       requiredDuringSchedulingIgnoredDuringExecution:
  #         nodeSelectorTerms:
  #           - matchExpressions:
  #               - key: role
  #                 operator: In
  #                 values:
  #                   - storage-node
  #     podAffinity:
  #     podAntiAffinity:
  #     topologySpreadConstraints:
  #     tolerations:
  #       - key: storage-node
  #         operator: Exists
  # The above placement information can also be specified for mon, osd, and mgr components
  #   mon:
  # Monitor deployments may contain an anti-affinity rule for avoiding monitor
  # collocation on the same node. This is a required rule when host network is used
  # or when AllowMultiplePerNode is false. Otherwise this anti-affinity rule is a
  # preferred rule with weight: 50.
  #   osd:
  #   prepareosd:
  #   mgr:
  #   cleanup:
  placement:
    # Allow placing monitors on control plane to maintain quorum
    mon:
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
          operator: Exists
  annotations:
  #   all:
  #   mon:
  #   osd:
  #   cleanup:
  #   prepareosd:
  # clusterMetadata annotations will be applied to only `rook-ceph-mon-endpoints` configmap and the `rook-ceph-mon` and `rook-ceph-admin-keyring` secrets.
  # And clusterMetadata annotations will not be merged with `all` annotations.
  #   clusterMetadata:
  #     kubed.appscode.com/sync: "true"
  # If no mgr annotations are set, prometheus scrape annotations will be set by default.
  #   mgr:
  labels:
  #   all:
  #   mon:
  #   osd:
  #   cleanup:
  #   mgr:
  #   prepareosd:
  # These labels are applied to ceph-exporter servicemonitor only
  #   exporter:
  # monitoring is a list of key-value pairs. It is injected into all the monitoring resources created by operator.
  # These labels can be passed as LabelSelector to Prometheus
  #   monitoring:
  #   crashcollector:
  resources:
    #The requests and limits set here, allow the mgr pod to use half of one CPU core and 1 gigabyte of memory
    mgr:
      limits:
        memory: "1024Mi"
      requests:
        cpu: "100m"
        memory: "1024Mi"
    mon:
      limits:
        memory: "2048Mi"
      requests:
        cpu: "100m"
        memory: "2048Mi"
    osd:
      limits:
        memory: "2048Mi"
      requests:
        cpu: "200m"
        memory: "2048Mi"
    # For OSD it also is a possible to specify requests/limits based on device class
    # osd-hdd:
    # osd-ssd:
    # osd-nvme:
    # prepareosd:
    # mgr-sidecar:
    # crashcollector:
    # logcollector:
    # cleanup:
    # exporter:
  # The option to automatically remove OSDs that are out and are safe to destroy.
  removeOSDsIfOutAndSafeToRemove: false
  priorityClassNames:
    #all: rook-ceph-default-priority-class
    mon: system-node-critical
    osd: system-node-critical
    mgr: system-cluster-critical
    #crashcollector: rook-ceph-crashcollector-priority-class
  storage: # cluster level storage configuration and selection
    useAllNodes: false
    useAllDevices: false
    #deviceFilter:
    config:
      # crushRoot: "custom-root" # specify a non-default root label for the CRUSH map
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # osdsPerDevice: "1" # this value can be overridden at the node or device level
      # encryptedDevice: "true" # the default value for this option is "false"
    # Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named
    # nodes below will be used as storage resources. Each node's 'name' field should match their 'kubernetes.io/hostname' label.
    nodes:
      - name: latios
        devices: # specific devices to use for storage can be specified for each node
          # - name: "/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1DL3AJY" # WD Red 3TB # FIXME Left out for now to test adding storage
          - name: "/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF" # WD Red 3TB
            config:
              metadataDevice: "/dev/h5b_metadata0/h5b_metadata0_3"
              encryptedDevice: "true"
          - name: "/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1" # WD Red 4TB
            config:
              metadataDevice: "/dev/h5b_metadata0/h5b_metadata0_2"
              encryptedDevice: "true"
          - name: "/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN" # WD Red 4TB
            config:
              metadataDevice: "/dev/h5b_metadata0/h5b_metadata0_1"
              encryptedDevice: "true"
          - name: "/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG" # Toshiba MG08 16TB
            config:
              metadataDevice: "/dev/h5b_metadata0/h5b_metadata0_0"
              encryptedDevice: "true"
          # - name: "nvme01" # multiple osds can be created on high performance devices
          #   config:
          #     osdsPerDevice: "5"
        # config: # configuration can be specified at the node level which overrides the cluster level config
    # when onlyApplyOSDPlacement is false, will merge both placement.All() and placement.osd
    onlyApplyOSDPlacement: false
    # Time for which an OSD pod will sleep before restarting, if it stopped due to flapping
    # flappingRestartIntervalHours: 24
  # The section for configuring management of daemon disruptions during upgrade or fencing.
  disruptionManagement:
    # If true, the operator will create and manage PodDisruptionBudgets for OSD, Mon, RGW, and MDS daemons. OSD PDBs are managed dynamically
    # via the strategy outlined in the [design](https://github.com/rook/rook/blob/master/design/ceph/ceph-managed-disruptionbudgets.md). The operator will
    # block eviction of OSDs by default and unblock them safely when drains are detected.
    managePodBudgets: true
    # A duration in minutes that determines how long an entire failureDomain like `region/zone/host` will be held in `noout` (in addition to the
    # default DOWN/OUT interval) when it is draining. This is only relevant when `managePodBudgets` is `true`. The default value is `30` minutes.
    osdMaintenanceTimeout: 30
    # A duration in minutes that the operator will wait for the placement groups to become healthy (active+clean) after a drain was completed and OSDs came back up.
    # Operator will continue with the next drain if the timeout exceeds. It only works if `managePodBudgets` is `true`.
    # No values or 0 means that the operator will wait until the placement groups are healthy before unblocking the next drain.
    pgHealthCheckTimeout: 0
  # csi defines CSI Driver settings applied per cluster.
  csi:
    readAffinity:
      # Enable read affinity to enable clients to optimize reads from an OSD in the same topology.
      # Enabling the read affinity may cause the OSDs to consume some extra memory.
      # For more details see this doc:
      # https://rook.io/docs/rook/latest/Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#enable-read-affinity-for-rbd-volumes
      enabled: true
    # cephfs driver specific settings.
    cephfs:
      # Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options.
      # kernelMountOptions: ""
      # Set CephFS Fuse mount options to use https://docs.ceph.com/en/quincy/man/8/ceph-fuse/#options.
      # fuseMountOptions: ""
  # healthChecks
  # Valid values for daemons are 'mon', 'osd', 'status'
  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s
      osd:
        disabled: false
        interval: 60s
      status:
        disabled: false
        interval: 60s
    # Change pod liveness probe timing or threshold values. Works for all mon,mgr,osd daemons.
    livenessProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
    # Change pod startup probe timing or threshold values. Works for all mon,mgr,osd daemons.
    startupProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
```

Logs to submit:

See comment.

Environment:

lsblk:

NAME                              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                 8:0    0   3.6T  0 disk 
sdb                                 8:16   0  14.6T  0 disk 
sdc                                 8:32   0   3.6T  0 disk 
sdd                                 8:48   0   2.7T  0 disk 
sde                                 8:64   0   2.7T  0 disk 
sdf                                 8:80   1  57.3G  0 disk 
├─sdf1                              8:81   1   236M  0 part /boot
└─sdf2                              8:82   1  57.1G  0 part /nix/store
                                                            /
sr0                                11:0    1  1024M  0 rom  
nbd0                               43:0    0     0B  0 disk 
nbd1                               43:32   0     0B  0 disk 
nbd2                               43:64   0     0B  0 disk 
nbd3                               43:96   0     0B  0 disk 
nbd4                               43:128  0     0B  0 disk 
nbd5                               43:160  0     0B  0 disk 
nbd6                               43:192  0     0B  0 disk 
nbd7                               43:224  0     0B  0 disk 
zram0                             253:0    0  62.8G  0 disk [SWAP]
nvme0n1                           259:0    0 476.9G  0 disk 
└─nvme0n1p1                       259:1    0 476.9G  0 part /var
nvme1n1                           259:2    0   1.8T  0 disk 
└─nvme1n1p1                       259:3    0   1.8T  0 part 
  ├─h5b_metadata0-h5b_metadata0_0 254:0    0   800G  0 lvm  
  ├─h5b_metadata0-h5b_metadata0_1 254:1    0   200G  0 lvm  
  ├─h5b_metadata0-h5b_metadata0_2 254:2    0   200G  0 lvm  
  └─h5b_metadata0-h5b_metadata0_3 254:3    0   150G  0 lvm  
nbd8                               43:256  0     0B  0 disk 
nbd9                               43:288  0     0B  0 disk 
nbd10                              43:320  0     0B  0 disk 
nbd11                              43:352  0     0B  0 disk 
nbd12                              43:384  0     0B  0 disk 
nbd13                              43:416  0     0B  0 disk 
nbd14                              43:448  0     0B  0 disk 
nbd15                              43:480  0     0B  0 disk

lvs:

  LV              VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  h5b_metadata0_0 h5b_metadata0 -wi-a----- 800.00g                                                    
  h5b_metadata0_1 h5b_metadata0 -wi-a----- 200.00g                                                    
  h5b_metadata0_2 h5b_metadata0 -wi-a----- 200.00g                                                    
  h5b_metadata0_3 h5b_metadata0 -wi-a----- 150.00g                                                    

pvs:

  PV             VG            Fmt  Attr PSize  PFree  
  /dev/nvme1n1p1 h5b_metadata0 lvm2 a--  <1.82t 513.01g
JustinLex commented 1 week ago
rook-ceph-osd-prepare job log ``` 2024/04/23 16:13:24 maxprocs: Leaving GOMAXPROCS=48: CPU quota undefined 2024-04-23 16:13:24.357112 I | cephcmd: desired devices to configure osds: [{Name:/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF OSDsPerDevice:1 MetadataDevice:/dev/h5b_metadata0/h5b_metadata0_3 DatabaseSizeMB:0 DeviceClass: InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 OSDsPerDevice:1 MetadataDevice:/dev/h5b_metadata0/h5b_metadata0_2 DatabaseSizeMB:0 DeviceClass: InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN OSDsPerDevice:1 MetadataDevice:/dev/h5b_metadata0/h5b_metadata0_1 DatabaseSizeMB:0 DeviceClass: InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG OSDsPerDevice:1 MetadataDevice:/dev/h5b_metadata0/h5b_metadata0_0 DatabaseSizeMB:0 DeviceClass: InitialWeight: IsFilter:false IsDevicePathFilter:false}] 2024-04-23 16:13:24.358046 I | rookcmd: starting Rook v1.14.0 with arguments '/rook/rook ceph osd provision' 2024-04-23 16:13:24.358062 I | rookcmd: flag values: --cluster-id=d95e53f2-57c5-45e4-a418-33563d259de7, --cluster-name=rook-ceph, --data-device-filter=, --data-device-path-filter=, 
--data-devices=[{"id":"/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF","storeConfig":{"osdsPerDevice":1,"encryptedDevice":true,"metadataDevice":"/dev/h5b_metadata0/h5b_metadata0_3"}},{"id":"/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1","storeConfig":{"osdsPerDevice":1,"encryptedDevice":true,"metadataDevice":"/dev/h5b_metadata0/h5b_metadata0_2"}},{"id":"/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN","storeConfig":{"osdsPerDevice":1,"encryptedDevice":true,"metadataDevice":"/dev/h5b_metadata0/h5b_metadata0_1"}},{"id":"/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG","storeConfig":{"osdsPerDevice":1,"encryptedDevice":true,"metadataDevice":"/dev/h5b_metadata0/h5b_metadata0_0"}}], --encrypted-device=false, --force-format=false, --help=false, --location=, --log-level=DEBUG, --metadata-device=, --node-name=latios, --osd-crush-device-class=, --osd-crush-initial-weight=, --osd-database-size=0, --osd-store-type=bluestore, --osd-wal-size=576, --osds-per-device=1, --pvc-backed-osd=false, --replace-osd=-1 2024-04-23 16:13:24.358068 I | ceph-spec: parsing mon endpoints: c=[2600:70ff:b04f:beef:1::c867]:3300,a=[2600:70ff:b04f:beef:1::f497]:3300,b=[2600:70ff:b04f:beef:1::3bdc]:3300 2024-04-23 16:13:24.364109 I | op-osd: CRUSH location=root=default host=latios 2024-04-23 16:13:24.364129 I | cephcmd: crush location of osd: root=default host=latios 2024-04-23 16:13:24.365985 D | cephclient: No ceph configuration override to merge as "rook-config-override" configmap is empty 2024-04-23 16:13:24.366014 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config 2024-04-23 16:13:24.366153 I | cephclient: generated admin config in /var/lib/rook/rook-ceph 2024-04-23 16:13:24.366373 D | cephclient: config file @ /etc/ceph/ceph.conf: [global] fsid = 011d40ab-c0e6-41aa-a3c6-36d6fb52c2f3 mon initial members = c a b mon host = 
[v2:[2600:70ff:b04f:beef:1::c867]:3300],[v2:[2600:70ff:b04f:beef:1::f497]:3300],[v2:[2600:70ff:b04f:beef:1::3bdc]:3300] [client.admin] keyring = /var/lib/rook/rook-ceph/client.admin.keyring 2024-04-23 16:13:24.366393 D | exec: Running command: dmsetup version 2024-04-23 16:13:24.372141 I | cephosd: Library version: 1.02.181-RHEL8 (2021-10-20) Driver version: 4.47.0 2024-04-23 16:13:24.376584 I | cephosd: discovering hardware 2024-04-23 16:13:24.376602 D | exec: Running command: lsblk --all --noheadings --list --output KNAME 2024-04-23 16:13:24.386492 D | exec: Running command: lsblk /dev/loop0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.390944 E | sys: failed to execute lsblk. output: . 2024-04-23 16:13:24.390967 W | inventory: skipping device "loop0". exit status 32 2024-04-23 16:13:24.390976 D | exec: Running command: lsblk /dev/loop1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.395586 E | sys: failed to execute lsblk. output: . 2024-04-23 16:13:24.395598 W | inventory: skipping device "loop1". exit status 32 2024-04-23 16:13:24.395605 D | exec: Running command: lsblk /dev/loop2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.399644 E | sys: failed to execute lsblk. output: . 2024-04-23 16:13:24.399657 W | inventory: skipping device "loop2". exit status 32 2024-04-23 16:13:24.399665 D | exec: Running command: lsblk /dev/loop3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.403691 E | sys: failed to execute lsblk. output: . 2024-04-23 16:13:24.403704 W | inventory: skipping device "loop3". 
exit status 32 2024-04-23 16:13:24.403712 D | exec: Running command: lsblk /dev/loop4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.407860 E | sys: failed to execute lsblk. output: . 2024-04-23 16:13:24.407872 W | inventory: skipping device "loop4". exit status 32 2024-04-23 16:13:24.407879 D | exec: Running command: lsblk /dev/loop5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.412469 E | sys: failed to execute lsblk. output: . 2024-04-23 16:13:24.412481 W | inventory: skipping device "loop5". exit status 32 2024-04-23 16:13:24.412490 D | exec: Running command: lsblk /dev/loop6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.416652 E | sys: failed to execute lsblk. output: . 2024-04-23 16:13:24.416680 W | inventory: skipping device "loop6". exit status 32 2024-04-23 16:13:24.416687 D | exec: Running command: lsblk /dev/loop7 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.421125 E | sys: failed to execute lsblk. output: . 2024-04-23 16:13:24.421136 W | inventory: skipping device "loop7". 
exit status 32 2024-04-23 16:13:24.421143 D | exec: Running command: lsblk /dev/sda --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.428995 D | sys: lsblk output: "SIZE=\"4000787030016\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sda\" KNAME=\"/dev/sda\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2024-04-23 16:13:24.429066 D | exec: Running command: sgdisk --print /dev/sda 2024-04-23 16:13:24.432579 D | exec: Running command: udevadm info --query=property /dev/sda 2024-04-23 16:13:24.442073 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-id/wwn-0x50014ee20fcbafe2 /dev/disk/by-diskseq/3 /dev/disk/by-path/pci-0000:63:00.2-ata-4.0 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 /dev/disk/by-path/pci-0000:63:00.2-ata-4\nDEVNAME=/dev/sda\nDEVPATH=/devices/pci0000:60/0000:60:08.1/0000:63:00.2/ata8/host7/target7:0:0/7:0:0:0/block/sda\nDEVTYPE=disk\nDISKSEQ=3\nID_ATA=1\nID_ATA_DOWNLOAD_MICROCODE=1\nID_ATA_FEATURE_SET_HPA=1\nID_ATA_FEATURE_SET_HPA_ENABLED=1\nID_ATA_FEATURE_SET_PM=1\nID_ATA_FEATURE_SET_PM_ENABLED=1\nID_ATA_FEATURE_SET_PUIS=1\nID_ATA_FEATURE_SET_PUIS_ENABLED=0\nID_ATA_FEATURE_SET_SECURITY=1\nID_ATA_FEATURE_SET_SECURITY_ENABLED=0\nID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=66032\nID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=66032\nID_ATA_FEATURE_SET_SMART=1\nID_ATA_FEATURE_SET_SMART_ENABLED=1\nID_ATA_PERIPHERAL_DEVICE_TYPE=0\nID_ATA_ROTATION_RATE_RPM=5400\nID_ATA_SATA=1\nID_ATA_SATA_SIGNAL_RATE_GEN1=1\nID_ATA_SATA_SIGNAL_RATE_GEN2=1\nID_ATA_WRITE_CACHE=1\nID_ATA_WRITE_CACHE_ENABLED=1\nID_BUS=ata\nID_MODEL=WDC_WD40EFRX-68N32N0\nID_MODEL_ENC=WDC\\x20WD40EFRX-68N32N0\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\nID_PATH=pci-0000:63:00.2-ata-4.0\nID_PATH_ATA_COMPAT=pci-0000:63:00.2-ata-4\nID_PATH_TAG=pci-0000_63_00_2-ata-4_0\nID_REVISION=82.00A82\nID_SERIAL=WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1\nID_SERIAL_SHORT=WD-WCC7K4TDVU
D1\nID_TYPE=disk\nID_WWN=0x50014ee20fcbafe2\nID_WWN_WITH_EXTENSION=0x50014ee20fcbafe2\nMAJOR=8\nMINOR=0\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUSEC_INITIALIZED=2448576" 2024-04-23 16:13:24.442116 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/sda 2024-04-23 16:13:24.446472 D | exec: Running command: lsblk /dev/sdb --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.454210 D | sys: lsblk output: "SIZE=\"16000900661248\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sdb\" KNAME=\"/dev/sdb\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2024-04-23 16:13:24.454255 D | exec: Running command: sgdisk --print /dev/sdb 2024-04-23 16:13:24.457667 D | exec: Running command: udevadm info --query=property /dev/sdb 2024-04-23 16:13:24.466365 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG /dev/disk/by-path/pci-0000:63:00.2-ata-2.0 /dev/disk/by-diskseq/4 /dev/disk/by-path/pci-0000:63:00.2-ata-2 
/dev/disk/by-id/wwn-0x5000039af8cb5ed8\nDEVNAME=/dev/sdb\nDEVPATH=/devices/pci0000:60/0000:60:08.1/0000:63:00.2/ata6/host5/target5:0:0/5:0:0:0/block/sdb\nDEVTYPE=disk\nDISKSEQ=4\nID_ATA=1\nID_ATA_DOWNLOAD_MICROCODE=1\nID_ATA_FEATURE_SET_APM=1\nID_ATA_FEATURE_SET_APM_CURRENT_VALUE=128\nID_ATA_FEATURE_SET_APM_ENABLED=1\nID_ATA_FEATURE_SET_PM=1\nID_ATA_FEATURE_SET_PM_ENABLED=1\nID_ATA_FEATURE_SET_SECURITY=1\nID_ATA_FEATURE_SET_SECURITY_ENABLED=0\nID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=66886\nID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=66886\nID_ATA_FEATURE_SET_SMART=1\nID_ATA_FEATURE_SET_SMART_ENABLED=1\nID_ATA_PERIPHERAL_DEVICE_TYPE=0\nID_ATA_ROTATION_RATE_RPM=7200\nID_ATA_SATA=1\nID_ATA_SATA_SIGNAL_RATE_GEN1=1\nID_ATA_SATA_SIGNAL_RATE_GEN2=1\nID_ATA_WRITE_CACHE=1\nID_ATA_WRITE_CACHE_ENABLED=1\nID_BUS=ata\nID_MODEL=TOSHIBA_MG08ACA16TE\nID_MODEL_ENC=TOSHIBA\\x20MG08ACA16TE\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\nID_PATH=pci-0000:63:00.2-ata-2.0\nID_PATH_ATA_COMPAT=pci-0000:63:00.2-ata-2\nID_PATH_TAG=pci-0000_63_00_2-ata-2_0\nID_REVISION=0102\nID_SERIAL=TOSHIBA_MG08ACA16TE_6180A1PCFVGG\nID_SERIAL_SHORT=6180A1PCFVGG\nID_TYPE=disk\nID_WWN=0x5000039af8cb5ed8\nID_WWN_WITH_EXTENSION=0x5000039af8cb5ed8\nMAJOR=8\nMINOR=16\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUSEC_INITIALIZED=2440146" 2024-04-23 16:13:24.466399 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/sdb 2024-04-23 16:13:24.470591 D | exec: Running command: lsblk /dev/sdc --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.478268 D | sys: lsblk output: "SIZE=\"4000787030016\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sdc\" KNAME=\"/dev/sdc\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2024-04-23 16:13:24.478314 D | exec: Running 
command: sgdisk --print /dev/sdc 2024-04-23 16:13:24.482729 D | exec: Running command: udevadm info --query=property /dev/sdc 2024-04-23 16:13:24.491875 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-id/wwn-0x50014ee2122965ae /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN /dev/disk/by-path/pci-0000:63:00.2-ata-5.0 /dev/disk/by-diskseq/5 /dev/disk/by-path/pci-0000:63:00.2-ata-5\nDEVNAME=/dev/sdc\nDEVPATH=/devices/pci0000:60/0000:60:08.1/0000:63:00.2/ata9/host8/target8:0:0/8:0:0:0/block/sdc\nDEVTYPE=disk\nDISKSEQ=5\nID_ATA=1\nID_ATA_DOWNLOAD_MICROCODE=1\nID_ATA_FEATURE_SET_HPA=1\nID_ATA_FEATURE_SET_HPA_ENABLED=1\nID_ATA_FEATURE_SET_PM=1\nID_ATA_FEATURE_SET_PM_ENABLED=1\nID_ATA_FEATURE_SET_PUIS=1\nID_ATA_FEATURE_SET_PUIS_ENABLED=0\nID_ATA_FEATURE_SET_SECURITY=1\nID_ATA_FEATURE_SET_SECURITY_ENABLED=0\nID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=66012\nID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=66012\nID_ATA_FEATURE_SET_SMART=1\nID_ATA_FEATURE_SET_SMART_ENABLED=1\nID_ATA_PERIPHERAL_DEVICE_TYPE=0\nID_ATA_ROTATION_RATE_RPM=5400\nID_ATA_SATA=1\nID_ATA_SATA_SIGNAL_RATE_GEN1=1\nID_ATA_SATA_SIGNAL_RATE_GEN2=1\nID_ATA_WRITE_CACHE=1\nID_ATA_WRITE_CACHE_ENABLED=1\nID_BUS=ata\nID_MODEL=WDC_WD40EFRX-68N32N0\nID_MODEL_ENC=WDC\\x20WD40EFRX-68N32N0\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\nID_PATH=pci-0000:63:00.2-ata-5.0\nID_PATH_ATA_COMPAT=pci-0000:63:00.2-ata-5\nID_PATH_TAG=pci-0000_63_00_2-ata-5_0\nID_REVISION=82.00A82\nID_SERIAL=WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN\nID_SERIAL_SHORT=WD-WCC7K6YX6UXN\nID_TYPE=disk\nID_WWN=0x50014ee2122965ae\nID_WWN_WITH_EXTENSION=0x50014ee2122965ae\nMAJOR=8\nMINOR=32\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUSEC_INITIALIZED=2452028" 2024-04-23 16:13:24.491905 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/sdc 2024-04-23 
16:13:24.496212 D | exec: Running command: lsblk /dev/sdd --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.505194 D | sys: lsblk output: "SIZE=\"3000592982016\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sdd\" KNAME=\"/dev/sdd\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2024-04-23 16:13:24.505240 D | exec: Running command: sgdisk --print /dev/sdd 2024-04-23 16:13:24.508590 D | exec: Running command: udevadm info --query=property /dev/sdd 2024-04-23 16:13:24.517584 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-path/pci-0000:63:00.2-ata-7.0 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1DL3AJY /dev/disk/by-path/pci-0000:63:00.2-ata-7 /dev/disk/by-diskseq/6 /dev/disk/by-id/wwn-0x50014ee2630d07fd\nDEVNAME=/dev/sdd\nDEVPATH=/devices/pci0000:60/0000:60:08.1/0000:63:00.2/ata11/host10/target10:0:0/10:0:0:0/block/sdd\nDEVTYPE=disk\nDISKSEQ=6\nID_ATA=1\nID_ATA_DOWNLOAD_MICROCODE=1\nID_ATA_FEATURE_SET_HPA=1\nID_ATA_FEATURE_SET_HPA_ENABLED=1\nID_ATA_FEATURE_SET_PM=1\nID_ATA_FEATURE_SET_PM_ENABLED=1\nID_ATA_FEATURE_SET_PUIS=1\nID_ATA_FEATURE_SET_PUIS_ENABLED=0\nID_ATA_FEATURE_SET_SECURITY=1\nID_ATA_FEATURE_SET_SECURITY_ENABLED=0\nID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=422\nID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=422\nID_ATA_FEATURE_SET_SMART=1\nID_ATA_FEATURE_SET_SMART_ENABLED=1\nID_ATA_PERIPHERAL_DEVICE_TYPE=0\nID_ATA_ROTATION_RATE_RPM=5400\nID_ATA_SATA=1\nID_ATA_SATA_SIGNAL_RATE_GEN1=1\nID_ATA_SATA_SIGNAL_RATE_GEN2=1\nID_ATA_WRITE_CACHE=1\nID_ATA_WRITE_CACHE_ENABLED=1\nID_BUS=ata\nID_MODEL=WDC_WD30EFRX-68EUZN0\nID_MODEL_ENC=WDC\\x20WD30EFRX-68EUZN0\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\nID_PATH=pci-0000:63:00.2-ata-7.0\nID_PATH_ATA_COMPAT=pci-0000:63:00.2-ata-7\nID_PATH_TAG=pci-0000_63_00_2-ata-7_0\nID_REVISION=82.00A82\nID_SERIAL=WDC_WD30EFRX-68EUZN0_WD-WCC4N1DL3AJY\nID_SERIAL_SHORT=WD-WCC4N1DL3AJY\nID_TYPE=disk\nID_WWN=0
x50014ee2630d07fd\nID_WWN_WITH_EXTENSION=0x50014ee2630d07fd\nMAJOR=8\nMINOR=48\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUSEC_INITIALIZED=2436408" 2024-04-23 16:13:24.517614 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/sdd 2024-04-23 16:13:24.521834 D | exec: Running command: lsblk /dev/sde --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.529548 D | sys: lsblk output: "SIZE=\"3000592982016\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sde\" KNAME=\"/dev/sde\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2024-04-23 16:13:24.529588 D | exec: Running command: sgdisk --print /dev/sde 2024-04-23 16:13:24.533294 D | exec: Running command: udevadm info --query=property /dev/sde 2024-04-23 16:13:24.543601 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-path/pci-0000:63:00.2-ata-8.0 /dev/disk/by-diskseq/7 /dev/disk/by-path/pci-0000:63:00.2-ata-8 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF 
/dev/disk/by-id/wwn-0x50014ee20d320712\nDEVNAME=/dev/sde\nDEVPATH=/devices/pci0000:60/0000:60:08.1/0000:63:00.2/ata12/host11/target11:0:0/11:0:0:0/block/sde\nDEVTYPE=disk\nDISKSEQ=7\nID_ATA=1\nID_ATA_DOWNLOAD_MICROCODE=1\nID_ATA_FEATURE_SET_HPA=1\nID_ATA_FEATURE_SET_HPA_ENABLED=1\nID_ATA_FEATURE_SET_PM=1\nID_ATA_FEATURE_SET_PM_ENABLED=1\nID_ATA_FEATURE_SET_PUIS=1\nID_ATA_FEATURE_SET_PUIS_ENABLED=0\nID_ATA_FEATURE_SET_SECURITY=1\nID_ATA_FEATURE_SET_SECURITY_ENABLED=0\nID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=416\nID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=416\nID_ATA_FEATURE_SET_SMART=1\nID_ATA_FEATURE_SET_SMART_ENABLED=1\nID_ATA_PERIPHERAL_DEVICE_TYPE=0\nID_ATA_ROTATION_RATE_RPM=5400\nID_ATA_SATA=1\nID_ATA_SATA_SIGNAL_RATE_GEN1=1\nID_ATA_SATA_SIGNAL_RATE_GEN2=1\nID_ATA_WRITE_CACHE=1\nID_ATA_WRITE_CACHE_ENABLED=1\nID_BUS=ata\nID_MODEL=WDC_WD30EFRX-68EUZN0\nID_MODEL_ENC=WDC\\x20WD30EFRX-68EUZN0\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\nID_PATH=pci-0000:63:00.2-ata-8.0\nID_PATH_ATA_COMPAT=pci-0000:63:00.2-ata-8\nID_PATH_TAG=pci-0000_63_00_2-ata-8_0\nID_REVISION=82.00A82\nID_SERIAL=WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF\nID_SERIAL_SHORT=WD-WCC4N5CLHYZF\nID_TYPE=disk\nID_WWN=0x50014ee20d320712\nID_WWN_WITH_EXTENSION=0x50014ee20d320712\nMAJOR=8\nMINOR=64\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUSEC_INITIALIZED=2439933" 2024-04-23 16:13:24.543632 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/sde 2024-04-23 16:13:24.548172 D | exec: Running command: lsblk /dev/sdf --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.556170 D | sys: lsblk output: "SIZE=\"61530439680\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sdf\" KNAME=\"/dev/sdf\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2024-04-23 
16:13:24.556226 D | exec: Running command: sgdisk --print /dev/sdf 2024-04-23 16:13:24.559976 D | exec: Running command: udevadm info --query=property /dev/sdf 2024-04-23 16:13:24.570492 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-path/pci-0000:06:00.3-usb-0:1:1.0-scsi-0:0:0:0 /dev/disk/by-path/pci-0000:06:00.3-usbv3-0:1:1.0-scsi-0:0:0:0 /dev/disk/by-diskseq/10 /dev/disk/by-id/usb-USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0\nDEVNAME=/dev/sdf\nDEVPATH=/devices/pci0000:00/0000:00:07.1/0000:06:00.3/usb4/4-1/4-1:1.0/host12/target12:0:0/12:0:0:0/block/sdf\nDEVTYPE=disk\nDISKSEQ=10\nID_BUS=usb\nID_INSTANCE=0:0\nID_MODEL=SanDisk_3.2Gen1\nID_MODEL_ENC=\\x20SanDisk\\x203.2Gen1\nID_MODEL_ID=5583\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=97fd5997-f390-0b4a-a3f8-d106c1723aea\nID_PATH=pci-0000:06:00.3-usb-0:1:1.0-scsi-0:0:0:0\nID_PATH_TAG=pci-0000_06_00_3-usb-0_1_1_0-scsi-0_0_0_0\nID_PATH_WITH_USB_REVISION=pci-0000:06:00.3-usbv3-0:1:1.0-scsi-0:0:0:0\nID_REVISION=1.00\nID_SERIAL=USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0\nID_SERIAL_SHORT=0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a\nID_TYPE=disk\nID_USB_DRIVER=usb-storage\nID_USB_INSTANCE=0:0\nID_USB_INTERFACES=:080650:\nID_USB_INTERFACE_NUM=00\nID_USB_MODEL=SanDisk_3.2Gen1\nID_USB_MODEL_ENC=\\x20SanDisk\\x203.2Gen1\nID_USB_MODEL_ID=5583\nID_USB_REVISION=1.00\nID_USB_SERIAL=USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0\nID_USB_SERIAL_SHORT=0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a\nID_USB_TYPE=disk\nID_USB_VENDOR=USB\nID_USB_VENDOR_ENC=\\x20USB\\x20\\x20\\x20\\x20\nID_US
B_VENDOR_ID=0781\nID_VENDOR=USB\nID_VENDOR_ENC=\\x20USB\\x20\\x20\\x20\\x20\nID_VENDOR_ID=0781\nMAJOR=8\nMINOR=80\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUSEC_INITIALIZED=3503279" 2024-04-23 16:13:24.570540 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/sdf 2024-04-23 16:13:24.576285 I | inventory: skipping device "sdf" because it has child, considering the child instead. 2024-04-23 16:13:24.576309 D | exec: Running command: lsblk /dev/sdf1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.589643 D | sys: lsblk output: "SIZE=\"247463936\" ROTA=\"1\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/sdf\" NAME=\"/dev/sdf1\" KNAME=\"/dev/sdf1\" MOUNTPOINT=\"/rootfs/boot\" FSTYPE=\"vfat\"" 2024-04-23 16:13:24.589676 D | exec: Running command: udevadm info --query=property /dev/sdf1 2024-04-23 16:13:24.599701 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-id/usb-USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0-part1 /dev/disk/by-label/ESP /dev/disk/by-partuuid/1c06f03b-704e-4657-b9cd-681a087a2fdc /dev/disk/by-path/pci-0000:06:00.3-usbv3-0:1:1.0-scsi-0:0:0:0-part1 /dev/disk/by-diskseq/10-part1 /dev/disk/by-uuid/12CE-A600 /dev/disk/by-path/pci-0000:06:00.3-usb-0:1:1.0-scsi-0:0:0:0-part1 
/dev/disk/by-partlabel/ESP\nDEVNAME=/dev/sdf1\nDEVPATH=/devices/pci0000:00/0000:00:07.1/0000:06:00.3/usb4/4-1/4-1:1.0/host12/target12:0:0/12:0:0:0/block/sdf/sdf1\nDEVTYPE=partition\nDISKSEQ=10\nID_BUS=usb\nID_FS_BLOCKSIZE=4096\nID_FS_LABEL=ESP\nID_FS_LABEL_ENC=ESP\nID_FS_SIZE=247435776\nID_FS_TYPE=vfat\nID_FS_USAGE=filesystem\nID_FS_UUID=12CE-A600\nID_FS_UUID_ENC=12CE-A600\nID_FS_VERSION=FAT16\nID_INSTANCE=0:0\nID_MODEL=SanDisk_3.2Gen1\nID_MODEL_ENC=\\x20SanDisk\\x203.2Gen1\nID_MODEL_ID=5583\nID_PART_ENTRY_DISK=8:80\nID_PART_ENTRY_NAME=ESP\nID_PART_ENTRY_NUMBER=1\nID_PART_ENTRY_OFFSET=16384\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_SIZE=483328\nID_PART_ENTRY_TYPE=c12a7328-f81f-11d2-ba4b-00a0c93ec93b\nID_PART_ENTRY_UUID=1c06f03b-704e-4657-b9cd-681a087a2fdc\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=97fd5997-f390-0b4a-a3f8-d106c1723aea\nID_PATH=pci-0000:06:00.3-usb-0:1:1.0-scsi-0:0:0:0\nID_PATH_TAG=pci-0000_06_00_3-usb-0_1_1_0-scsi-0_0_0_0\nID_PATH_WITH_USB_REVISION=pci-0000:06:00.3-usbv3-0:1:1.0-scsi-0:0:0:0\nID_REVISION=1.00\nID_SERIAL=USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0\nID_SERIAL_SHORT=0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a\nID_TYPE=disk\nID_USB_DRIVER=usb-storage\nID_USB_INSTANCE=0:0\nID_USB_INTERFACES=:080650:\nID_USB_INTERFACE_NUM=00\nID_USB_MODEL=SanDisk_3.2Gen1\nID_USB_MODEL_ENC=\\x20SanDisk\\x203.2Gen1\nID_USB_MODEL_ID=5583\nID_USB_REVISION=1.00\nID_USB_SERIAL=USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0\nID_USB_SERIAL_SHORT=0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a\nID_USB_TYPE=disk\nID_USB_VENDOR=USB\nID_USB_VENDOR_ENC=\\x20USB\\x20\\x20\\x20\\x20\nID_USB_VENDOR_ID=0781\nID_VENDOR=USB\nID_VEND
OR_ENC=\\x20USB\\x20\\x20\\x20\\x20\nID_VENDOR_ID=0781\nMAJOR=8\nMINOR=81\nPARTN=1\nPARTNAME=ESP\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUDISKS_IGNORE=1\nUSEC_INITIALIZED=3503317" 2024-04-23 16:13:24.599770 D | exec: Running command: lsblk /dev/sdf2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.608193 D | sys: lsblk output: "SIZE=\"61274570240\" ROTA=\"1\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/sdf\" NAME=\"/dev/sdf2\" KNAME=\"/dev/sdf2\" MOUNTPOINT=\"/rootfs\" FSTYPE=\"ext4\"" 2024-04-23 16:13:24.608215 D | exec: Running command: udevadm info --query=property /dev/sdf2 2024-04-23 16:13:24.617368 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-path/pci-0000:06:00.3-usbv3-0:1:1.0-scsi-0:0:0:0-part2 /dev/disk/by-label/nixos /dev/disk/by-diskseq/10-part2 /dev/disk/by-partlabel/primary /dev/disk/by-partuuid/f222513b-ded1-49fa-b591-20ce86a2fe7f /dev/disk/by-id/usb-USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0-part2 /dev/disk/by-path/pci-0000:06:00.3-usb-0:1:1.0-scsi-0:0:0:0-part2 /dev/disk/by-uuid/f222513b-ded1-49fa-b591-20ce86a2fe7f 
/dev/root\nDEVNAME=/dev/sdf2\nDEVPATH=/devices/pci0000:00/0000:00:07.1/0000:06:00.3/usb4/4-1/4-1:1.0/host12/target12:0:0/12:0:0:0/block/sdf/sdf2\nDEVTYPE=partition\nDISKSEQ=10\nID_BUS=usb\nID_FS_BLOCKSIZE=4096\nID_FS_LABEL=nixos\nID_FS_LABEL_ENC=nixos\nID_FS_LASTBLOCK=14959611\nID_FS_SIZE=61274566656\nID_FS_TYPE=ext4\nID_FS_USAGE=filesystem\nID_FS_UUID=f222513b-ded1-49fa-b591-20ce86a2fe7f\nID_FS_UUID_ENC=f222513b-ded1-49fa-b591-20ce86a2fe7f\nID_FS_VERSION=1.0\nID_INSTANCE=0:0\nID_MODEL=SanDisk_3.2Gen1\nID_MODEL_ENC=\\x20SanDisk\\x203.2Gen1\nID_MODEL_ID=5583\nID_PART_ENTRY_DISK=8:80\nID_PART_ENTRY_NAME=primary\nID_PART_ENTRY_NUMBER=2\nID_PART_ENTRY_OFFSET=499712\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_SIZE=119676895\nID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4\nID_PART_ENTRY_UUID=f222513b-ded1-49fa-b591-20ce86a2fe7f\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=97fd5997-f390-0b4a-a3f8-d106c1723aea\nID_PATH=pci-0000:06:00.3-usb-0:1:1.0-scsi-0:0:0:0\nID_PATH_TAG=pci-0000_06_00_3-usb-0_1_1_0-scsi-0_0_0_0\nID_PATH_WITH_USB_REVISION=pci-0000:06:00.3-usbv3-0:1:1.0-scsi-0:0:0:0\nID_REVISION=1.00\nID_SERIAL=USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0\nID_SERIAL_SHORT=0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a\nID_TYPE=disk\nID_USB_DRIVER=usb-storage\nID_USB_INSTANCE=0:0\nID_USB_INTERFACES=:080650:\nID_USB_INTERFACE_NUM=00\nID_USB_MODEL=SanDisk_3.2Gen1\nID_USB_MODEL_ENC=\\x20SanDisk\\x203.2Gen1\nID_USB_MODEL_ID=5583\nID_USB_REVISION=1.00\nID_USB_SERIAL=USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0\nID_USB_SERIAL_SHORT=0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a\nID_USB_TYPE=disk\nID_USB_VENDOR=USB\nID_USB_VENDOR_ENC=
\\x20USB\\x20\\x20\\x20\\x20\nID_USB_VENDOR_ID=0781\nID_VENDOR=USB\nID_VENDOR_ENC=\\x20USB\\x20\\x20\\x20\\x20\nID_VENDOR_ID=0781\nMAJOR=8\nMINOR=82\nPARTN=2\nPARTNAME=primary\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUSEC_INITIALIZED=3503632"
2024-04-23 16:13:24.617403 D | exec: Running command: lsblk /dev/sr0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.625245 D | sys: lsblk output: "SIZE=\"1073741312\" ROTA=\"1\" RO=\"0\" TYPE=\"rom\" PKNAME=\"\" NAME=\"/dev/sr0\" KNAME=\"/dev/sr0\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-04-23 16:13:24.625265 W | inventory: skipping device "sr0". unsupported diskType rom
2024-04-23 16:13:24.625273 D | exec: Running command: lsblk /dev/nbd0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.629476 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.629486 W | inventory: skipping device "nbd0". exit status 32
2024-04-23 16:13:24.629494 D | exec: Running command: lsblk /dev/nbd1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.633579 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.633590 W | inventory: skipping device "nbd1". exit status 32
2024-04-23 16:13:24.633597 D | exec: Running command: lsblk /dev/nbd2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.637769 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.637780 W | inventory: skipping device "nbd2". exit status 32
2024-04-23 16:13:24.637788 D | exec: Running command: lsblk /dev/nbd3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.641878 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.641888 W | inventory: skipping device "nbd3". exit status 32
2024-04-23 16:13:24.641897 D | exec: Running command: lsblk /dev/nbd4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.646449 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.646460 W | inventory: skipping device "nbd4". exit status 32
2024-04-23 16:13:24.646467 D | exec: Running command: lsblk /dev/nbd5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.650558 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.650568 W | inventory: skipping device "nbd5". exit status 32
2024-04-23 16:13:24.650577 D | exec: Running command: lsblk /dev/nbd6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.654895 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.654906 W | inventory: skipping device "nbd6". exit status 32
2024-04-23 16:13:24.654913 D | exec: Running command: lsblk /dev/nbd7 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.659293 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.659304 W | inventory: skipping device "nbd7".
exit status 32
2024-04-23 16:13:24.659311 D | exec: Running command: lsblk /dev/zram0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.667762 D | sys: lsblk output: "SIZE=\"67400368128\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/zram0\" KNAME=\"/dev/zram0\" MOUNTPOINT=\"[SWAP]\" FSTYPE=\"\""
2024-04-23 16:13:24.667802 D | exec: Running command: sgdisk --print /dev/zram0
2024-04-23 16:13:24.671012 D | exec: Running command: udevadm info --query=property /dev/zram0
2024-04-23 16:13:24.680305 D | sys: udevadm info output: "DEVNAME=/dev/zram0\nDEVPATH=/devices/virtual/block/zram0\nDEVTYPE=disk\nDISKSEQ=15\nMAJOR=253\nMINOR=0\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUDISKS_IGNORE=1\nUSEC_INITIALIZED=23332352"
2024-04-23 16:13:24.680330 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/zram0
2024-04-23 16:13:24.685114 D | exec: Running command: lsblk /dev/dm-0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.693577 D | sys: lsblk output: "SIZE=\"858993459200\" ROTA=\"0\" RO=\"0\" TYPE=\"lvm\" PKNAME=\"\" NAME=\"/dev/mapper/h5b_metadata0-h5b_metadata0_0\" KNAME=\"/dev/dm-0\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-04-23 16:13:24.693606 D | exec: Running command: udevadm info --query=property /dev/dm-0
2024-04-23 16:13:24.702616 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-id/dm-uuid-LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp4w6WJF2qM4UEJi87KskymSyRgpJUIHXxm /dev/mapper/h5b_metadata0-h5b_metadata0_0 /dev/h5b_metadata0/h5b_metadata0_0 
/dev/disk/by-id/dm-name-h5b_metadata0-h5b_metadata0_0\nDEVNAME=/dev/dm-0\nDEVPATH=/devices/virtual/block/dm-0\nDEVTYPE=disk\nDISKSEQ=11\nDM_ACTIVATION=1\nDM_LV_NAME=h5b_metadata0_0\nDM_NAME=h5b_metadata0-h5b_metadata0_0\nDM_SUSPENDED=0\nDM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1\nDM_UDEV_PRIMARY_SOURCE_FLAG=1\nDM_UDEV_RULES_VSN=2\nDM_UUID=LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp4w6WJF2qM4UEJi87KskymSyRgpJUIHXxm\nDM_VG_NAME=h5b_metadata0\nMAJOR=254\nMINOR=0\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nSYSTEMD_READY=1\nTAGS=:systemd:\nUSEC_INITIALIZED=8488720" 2024-04-23 16:13:24.702657 D | exec: Running command: lsblk /dev/dm-1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.711159 D | sys: lsblk output: "SIZE=\"214748364800\" ROTA=\"0\" RO=\"0\" TYPE=\"lvm\" PKNAME=\"\" NAME=\"/dev/mapper/h5b_metadata0-h5b_metadata0_1\" KNAME=\"/dev/dm-1\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2024-04-23 16:13:24.711179 D | exec: Running command: udevadm info --query=property /dev/dm-1 2024-04-23 16:13:24.720363 D | sys: udevadm info output: "DEVLINKS=/dev/mapper/h5b_metadata0-h5b_metadata0_1 /dev/h5b_metadata0/h5b_metadata0_1 /dev/disk/by-id/dm-uuid-LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp40YetwLxJ1txmhzVQSPES2gVjkMBEoKEs 
/dev/disk/by-id/dm-name-h5b_metadata0-h5b_metadata0_1\nDEVNAME=/dev/dm-1\nDEVPATH=/devices/virtual/block/dm-1\nDEVTYPE=disk\nDISKSEQ=12\nDM_ACTIVATION=1\nDM_LV_NAME=h5b_metadata0_1\nDM_NAME=h5b_metadata0-h5b_metadata0_1\nDM_SUSPENDED=0\nDM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1\nDM_UDEV_PRIMARY_SOURCE_FLAG=1\nDM_UDEV_RULES_VSN=2\nDM_UUID=LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp40YetwLxJ1txmhzVQSPES2gVjkMBEoKEs\nDM_VG_NAME=h5b_metadata0\nMAJOR=254\nMINOR=1\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nSYSTEMD_READY=1\nTAGS=:systemd:\nUSEC_INITIALIZED=8489093" 2024-04-23 16:13:24.720383 D | exec: Running command: lsblk /dev/dm-2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.727863 D | sys: lsblk output: "SIZE=\"214748364800\" ROTA=\"0\" RO=\"0\" TYPE=\"lvm\" PKNAME=\"\" NAME=\"/dev/mapper/h5b_metadata0-h5b_metadata0_2\" KNAME=\"/dev/dm-2\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2024-04-23 16:13:24.727881 D | exec: Running command: udevadm info --query=property /dev/dm-2 2024-04-23 16:13:24.735679 D | sys: udevadm info output: "DEVLINKS=/dev/mapper/h5b_metadata0-h5b_metadata0_2 /dev/disk/by-id/dm-name-h5b_metadata0-h5b_metadata0_2 /dev/disk/by-id/dm-uuid-LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp4YqvdOa3gukygO11a2cCBhVuy3Hq3hK1W 
/dev/h5b_metadata0/h5b_metadata0_2\nDEVNAME=/dev/dm-2\nDEVPATH=/devices/virtual/block/dm-2\nDEVTYPE=disk\nDISKSEQ=13\nDM_ACTIVATION=1\nDM_LV_NAME=h5b_metadata0_2\nDM_NAME=h5b_metadata0-h5b_metadata0_2\nDM_SUSPENDED=0\nDM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1\nDM_UDEV_PRIMARY_SOURCE_FLAG=1\nDM_UDEV_RULES_VSN=2\nDM_UUID=LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp4YqvdOa3gukygO11a2cCBhVuy3Hq3hK1W\nDM_VG_NAME=h5b_metadata0\nMAJOR=254\nMINOR=2\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nSYSTEMD_READY=1\nTAGS=:systemd:\nUSEC_INITIALIZED=8489648" 2024-04-23 16:13:24.735700 D | exec: Running command: lsblk /dev/dm-3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.743222 D | sys: lsblk output: "SIZE=\"161061273600\" ROTA=\"0\" RO=\"0\" TYPE=\"lvm\" PKNAME=\"\" NAME=\"/dev/mapper/h5b_metadata0-h5b_metadata0_3\" KNAME=\"/dev/dm-3\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2024-04-23 16:13:24.743240 D | exec: Running command: udevadm info --query=property /dev/dm-3 2024-04-23 16:13:24.751813 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-id/dm-uuid-LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp44LCSn4DQLdlDNv04uednUYjx0oXGnYJg /dev/mapper/h5b_metadata0-h5b_metadata0_3 /dev/h5b_metadata0/h5b_metadata0_3 
/dev/disk/by-id/dm-name-h5b_metadata0-h5b_metadata0_3\nDEVNAME=/dev/dm-3\nDEVPATH=/devices/virtual/block/dm-3\nDEVTYPE=disk\nDISKSEQ=14\nDM_ACTIVATION=1\nDM_LV_NAME=h5b_metadata0_3\nDM_NAME=h5b_metadata0-h5b_metadata0_3\nDM_SUSPENDED=0\nDM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1\nDM_UDEV_PRIMARY_SOURCE_FLAG=1\nDM_UDEV_RULES_VSN=2\nDM_UUID=LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp44LCSn4DQLdlDNv04uednUYjx0oXGnYJg\nDM_VG_NAME=h5b_metadata0\nMAJOR=254\nMINOR=3\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nSYSTEMD_READY=1\nTAGS=:systemd:\nUSEC_INITIALIZED=8490116" 2024-04-23 16:13:24.751864 D | exec: Running command: lsblk /dev/nvme0n1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.759709 D | sys: lsblk output: "SIZE=\"512110190592\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nvme0n1\" KNAME=\"/dev/nvme0n1\" MOUNTPOINT=\"\" FSTYPE=\"\"" 2024-04-23 16:13:24.759759 D | exec: Running command: sgdisk --print /dev/nvme0n1 2024-04-23 16:13:24.763243 D | exec: Running command: udevadm info --query=property /dev/nvme0n1 2024-04-23 16:13:24.772970 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-diskseq/1 /dev/disk/by-id/nvme-eui.044a500181401ad2 /dev/disk/by-id/nvme-UMIS_RPETJ512MGE2QDQ_SS0L25217X1RC18J24R2_1 /dev/disk/by-id/nvme-UMIS_RPETJ512MGE2QDQ_SS0L25217X1RC18J24R2 /dev/disk/by-path/pci-0000:24:00.0-nvme-1\nDEVNAME=/dev/nvme0n1\nDEVPATH=/devices/pci0000:20/0000:20:03.3/0000:24:00.0/nvme/nvme0/nvme0n1\nDEVTYPE=disk\nDISKSEQ=1\nID_MODEL=UMIS 
RPETJ512MGE2QDQ\nID_NSID=1\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=412e0852-7ee4-4001-b1b1-beb6d555b2e4\nID_PATH=pci-0000:24:00.0-nvme-1\nID_PATH_TAG=pci-0000_24_00_0-nvme-1\nID_REVISION=1.3Q0630\nID_SERIAL=UMIS_RPETJ512MGE2QDQ_SS0L25217X1RC18J24R2_1\nID_SERIAL_SHORT=SS0L25217X1RC18J24R2\nID_WWN=eui.044a500181401ad2\nMAJOR=259\nMINOR=0\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUSEC_INITIALIZED=1062414" 2024-04-23 16:13:24.772990 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nvme0n1 2024-04-23 16:13:24.777644 I | inventory: skipping device "nvme0n1" because it has child, considering the child instead. 2024-04-23 16:13:24.777657 D | exec: Running command: lsblk /dev/nvme0n1p1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE 2024-04-23 16:13:24.785247 D | sys: lsblk output: "SIZE=\"512108789760\" ROTA=\"0\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/nvme0n1\" NAME=\"/dev/nvme0n1p1\" KNAME=\"/dev/nvme0n1p1\" MOUNTPOINT=\"/rootfs/var\" FSTYPE=\"xfs\"" 2024-04-23 16:13:24.785264 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p1 2024-04-23 16:13:24.794279 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-id/nvme-UMIS_RPETJ512MGE2QDQ_SS0L25217X1RC18J24R2_1-part1 /dev/disk/by-uuid/39fa028d-e147-428f-9f9f-6fbf2af76871 /dev/disk/by-partuuid/1fa5d815-9b58-470e-8b8b-754dba77103b /dev/disk/by-path/pci-0000:24:00.0-nvme-1-part1 /dev/disk/by-id/nvme-UMIS_RPETJ512MGE2QDQ_SS0L25217X1RC18J24R2-part1 /dev/disk/by-diskseq/1-part1 
/dev/disk/by-id/nvme-eui.044a500181401ad2-part1\nDEVNAME=/dev/nvme0n1p1\nDEVPATH=/devices/pci0000:20/0000:20:03.3/0000:24:00.0/nvme/nvme0/nvme0n1/nvme0n1p1\nDEVTYPE=partition\nDISKSEQ=1\nID_FS_BLOCKSIZE=4096\nID_FS_LASTBLOCK=125026560\nID_FS_SIZE=511858737152\nID_FS_TYPE=xfs\nID_FS_USAGE=filesystem\nID_FS_UUID=39fa028d-e147-428f-9f9f-6fbf2af76871\nID_FS_UUID_ENC=39fa028d-e147-428f-9f9f-6fbf2af76871\nID_MODEL=UMIS RPETJ512MGE2QDQ\nID_NSID=1\nID_PART_ENTRY_DISK=259:0\nID_PART_ENTRY_NUMBER=1\nID_PART_ENTRY_OFFSET=2048\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_SIZE=1000212480\nID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4\nID_PART_ENTRY_UUID=1fa5d815-9b58-470e-8b8b-754dba77103b\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=412e0852-7ee4-4001-b1b1-beb6d555b2e4\nID_PATH=pci-0000:24:00.0-nvme-1\nID_PATH_TAG=pci-0000_24_00_0-nvme-1\nID_REVISION=1.3Q0630\nID_SERIAL=UMIS_RPETJ512MGE2QDQ_SS0L25217X1RC18J24R2_1\nID_SERIAL_SHORT=SS0L25217X1RC18J24R2\nID_WWN=eui.044a500181401ad2\nMAJOR=259\nMINOR=1\nPARTN=1\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUSEC_INITIALIZED=1062439"
2024-04-23 16:13:24.794302 D | exec: Running command: lsblk /dev/nvme1n1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.801592 D | sys: lsblk output: "SIZE=\"2000398934016\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nvme1n1\" KNAME=\"/dev/nvme1n1\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-04-23 16:13:24.801628 D | exec: Running command: sgdisk --print /dev/nvme1n1
2024-04-23 16:13:24.804675 D | exec: Running command: udevadm info --query=property /dev/nvme1n1
2024-04-23 16:13:24.813497 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-diskseq/2 /dev/disk/by-id/nvme-SPCC_M.2_PCIe_SSD_230363635160173_1 /dev/disk/by-id/nvme-eui.32333033363336334ce0001835313630 /dev/disk/by-id/nvme-SPCC_M.2_PCIe_SSD_230363635160173 /dev/disk/by-path/pci-0000:61:00.0-nvme-1\nDEVNAME=/dev/nvme1n1\nDEVPATH=/devices/pci0000:60/0000:60:01.3/0000:61:00.0/nvme/nvme1/nvme1n1\nDEVTYPE=disk\nDISKSEQ=2\nID_MODEL=SPCC M.2 PCIe SSD\nID_NSID=1\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=f6988c1e-f72d-424f-bec7-dc15c1a27cf8\nID_PATH=pci-0000:61:00.0-nvme-1\nID_PATH_TAG=pci-0000_61_00_0-nvme-1\nID_REVISION=VF001C27\nID_SERIAL=SPCC_M.2_PCIe_SSD_230363635160173_1\nID_SERIAL_SHORT=230363635160173\nID_WWN=eui.32333033363336334ce0001835313630\nMAJOR=259\nMINOR=2\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUSEC_INITIALIZED=1269168"
2024-04-23 16:13:24.813516 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nvme1n1
2024-04-23 16:13:24.818570 I | inventory: skipping device "nvme1n1" because it has child, considering the child instead.
2024-04-23 16:13:24.818582 D | exec: Running command: lsblk /dev/nvme1n1p1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.826809 D | sys: lsblk output: "SIZE=\"2000397795328\" ROTA=\"0\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/nvme1n1\" NAME=\"/dev/nvme1n1p1\" KNAME=\"/dev/nvme1n1p1\" MOUNTPOINT=\"\" FSTYPE=\"LVM2_member\""
2024-04-23 16:13:24.826826 D | exec: Running command: udevadm info --query=property /dev/nvme1n1p1
2024-04-23 16:13:24.835226 D | sys: udevadm info output: "DEVLINKS=/dev/disk/by-id/lvm-pv-uuid-rTkv5n-yCUf-llX4-ndbV-yotq-mcXT-5fy0FT /dev/disk/by-partuuid/9928e468-a288-446a-91f6-da27b610c394 /dev/disk/by-id/nvme-SPCC_M.2_PCIe_SSD_230363635160173-part1 /dev/disk/by-path/pci-0000:61:00.0-nvme-1-part1 /dev/disk/by-id/nvme-eui.32333033363336334ce0001835313630-part1 /dev/disk/by-id/nvme-SPCC_M.2_PCIe_SSD_230363635160173_1-part1 /dev/disk/by-diskseq/2-part1\nDEVNAME=/dev/nvme1n1p1\nDEVPATH=/devices/pci0000:60/0000:60:01.3/0000:61:00.0/nvme/nvme1/nvme1n1/nvme1n1p1\nDEVTYPE=partition\nDISKSEQ=2\nID_FS_TYPE=LVM2_member\nID_FS_USAGE=raid\nID_FS_UUID=rTkv5n-yCUf-llX4-ndbV-yotq-mcXT-5fy0FT\nID_FS_UUID_ENC=rTkv5n-yCUf-llX4-ndbV-yotq-mcXT-5fy0FT\nID_FS_VERSION=LVM2 001\nID_MODEL=SPCC M.2 PCIe SSD\nID_NSID=1\nID_PART_ENTRY_DISK=259:2\nID_PART_ENTRY_NUMBER=1\nID_PART_ENTRY_OFFSET=2048\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_SIZE=3907026944\nID_PART_ENTRY_TYPE=e6d6d379-f507-44c2-a23c-238f2a3df928\nID_PART_ENTRY_UUID=9928e468-a288-446a-91f6-da27b610c394\nID_PART_TABLE_TYPE=gpt\nID_PART_TABLE_UUID=f6988c1e-f72d-424f-bec7-dc15c1a27cf8\nID_PATH=pci-0000:61:00.0-nvme-1\nID_PATH_TAG=pci-0000_61_00_0-nvme-1\nID_REVISION=VF001C27\nID_SERIAL=SPCC_M.2_PCIe_SSD_230363635160173_1\nID_SERIAL_SHORT=230363635160173\nID_WWN=eui.32333033363336334ce0001835313630\nMAJOR=259\nMINOR=3\nPARTN=1\nPATH=/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/bin:/nix/store/w4vvn9xcfbxg9viy3v5fg2ccx14i54q5-udev-path/sbin\nSUBSYSTEM=block\nTAGS=:systemd:\nUDISKS_IGNORE=1\nUSEC_INITIALIZED=1269193"
2024-04-23 16:13:24.835246 D | exec: Running command: lsblk /dev/nbd8 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.839076 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.839088 W | inventory: skipping device "nbd8". exit status 32
2024-04-23 16:13:24.839095 D | exec: Running command: lsblk /dev/nbd9 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.842937 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.842946 W | inventory: skipping device "nbd9". exit status 32
2024-04-23 16:13:24.842952 D | exec: Running command: lsblk /dev/nbd10 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.846726 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.846738 W | inventory: skipping device "nbd10". exit status 32
2024-04-23 16:13:24.846745 D | exec: Running command: lsblk /dev/nbd11 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.850700 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.850710 W | inventory: skipping device "nbd11". exit status 32
2024-04-23 16:13:24.850716 D | exec: Running command: lsblk /dev/nbd12 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.854568 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.854578 W | inventory: skipping device "nbd12". exit status 32
2024-04-23 16:13:24.854588 D | exec: Running command: lsblk /dev/nbd13 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.858258 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.858269 W | inventory: skipping device "nbd13". exit status 32
2024-04-23 16:13:24.858275 D | exec: Running command: lsblk /dev/nbd14 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.861964 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.861978 W | inventory: skipping device "nbd14". exit status 32
2024-04-23 16:13:24.861985 D | exec: Running command: lsblk /dev/nbd15 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.865884 E | sys: failed to execute lsblk. output: .
2024-04-23 16:13:24.865896 W | inventory: skipping device "nbd15". exit status 32
2024-04-23 16:13:24.865900 D | inventory: discovered disks are:
2024-04-23 16:13:24.865941 D | inventory: &{Name:sda Parent: HasChildren:false DevLinks:/dev/disk/by-id/wwn-0x50014ee20fcbafe2 /dev/disk/by-diskseq/3 /dev/disk/by-path/pci-0000:63:00.2-ata-4.0 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 /dev/disk/by-path/pci-0000:63:00.2-ata-4 Size:4000787030016 UUID:d3083bb9-6fca-44d0-8c99-aced76d37de7 Serial:WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model:WDC_WD40EFRX-68N32N0 WWN:0x50014ee20fcbafe2 WWNVendorExtension:0x50014ee20fcbafe2 Empty:false CephVolumeData: RealPath:/dev/sda KernelName:sda Encrypted:false}
2024-04-23 16:13:24.865955 D | inventory: &{Name:sdb Parent: HasChildren:false DevLinks:/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG /dev/disk/by-path/pci-0000:63:00.2-ata-2.0 /dev/disk/by-diskseq/4 /dev/disk/by-path/pci-0000:63:00.2-ata-2 /dev/disk/by-id/wwn-0x5000039af8cb5ed8 Size:16000900661248 UUID:ced9e595-b31d-4d5a-8984-6a9ed23d9f29 Serial:TOSHIBA_MG08ACA16TE_6180A1PCFVGG Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model:TOSHIBA_MG08ACA16TE WWN:0x5000039af8cb5ed8 WWNVendorExtension:0x5000039af8cb5ed8 Empty:false CephVolumeData: RealPath:/dev/sdb KernelName:sdb Encrypted:false}
2024-04-23 16:13:24.865967 D | inventory: &{Name:sdc Parent: HasChildren:false DevLinks:/dev/disk/by-id/wwn-0x50014ee2122965ae /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN /dev/disk/by-path/pci-0000:63:00.2-ata-5.0 /dev/disk/by-diskseq/5 /dev/disk/by-path/pci-0000:63:00.2-ata-5 Size:4000787030016 UUID:f0f133a2-b42c-4cbd-9296-544de8033409 Serial:WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model:WDC_WD40EFRX-68N32N0 WWN:0x50014ee2122965ae WWNVendorExtension:0x50014ee2122965ae Empty:false CephVolumeData: RealPath:/dev/sdc KernelName:sdc Encrypted:false}
2024-04-23 16:13:24.865981 D | inventory: &{Name:sdd Parent: HasChildren:false DevLinks:/dev/disk/by-path/pci-0000:63:00.2-ata-7.0 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1DL3AJY /dev/disk/by-path/pci-0000:63:00.2-ata-7 /dev/disk/by-diskseq/6 /dev/disk/by-id/wwn-0x50014ee2630d07fd Size:3000592982016 UUID:5bbc7959-5487-47f7-8bb8-35953aa96995 Serial:WDC_WD30EFRX-68EUZN0_WD-WCC4N1DL3AJY Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model:WDC_WD30EFRX-68EUZN0 WWN:0x50014ee2630d07fd WWNVendorExtension:0x50014ee2630d07fd Empty:false CephVolumeData: RealPath:/dev/sdd KernelName:sdd Encrypted:false}
2024-04-23 16:13:24.865993 D | inventory: &{Name:sde Parent: HasChildren:false DevLinks:/dev/disk/by-path/pci-0000:63:00.2-ata-8.0 /dev/disk/by-diskseq/7 /dev/disk/by-path/pci-0000:63:00.2-ata-8 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF /dev/disk/by-id/wwn-0x50014ee20d320712 Size:3000592982016 UUID:705776b1-db0f-4d2e-9f42-ea6fa1ce4740 Serial:WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model:WDC_WD30EFRX-68EUZN0 WWN:0x50014ee20d320712 WWNVendorExtension:0x50014ee20d320712 Empty:false CephVolumeData: RealPath:/dev/sde KernelName:sde Encrypted:false}
2024-04-23 16:13:24.866013 D | inventory: &{Name:sdf1 Parent:sdf HasChildren:false DevLinks:/dev/disk/by-id/usb-USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0-part1 /dev/disk/by-label/ESP /dev/disk/by-partuuid/1c06f03b-704e-4657-b9cd-681a087a2fdc /dev/disk/by-path/pci-0000:06:00.3-usbv3-0:1:1.0-scsi-0:0:0:0-part1 /dev/disk/by-diskseq/10-part1 /dev/disk/by-uuid/12CE-A600 /dev/disk/by-path/pci-0000:06:00.3-usb-0:1:1.0-scsi-0:0:0:0-part1 /dev/disk/by-partlabel/ESP Size:247463936 UUID: Serial:USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem:vfat Mountpoint:boot Vendor:USB Model:SanDisk_3.2Gen1 WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sdf1 KernelName:sdf1 Encrypted:false}
2024-04-23 16:13:24.866032 D | inventory: &{Name:sdf2 Parent:sdf HasChildren:false DevLinks:/dev/disk/by-path/pci-0000:06:00.3-usbv3-0:1:1.0-scsi-0:0:0:0-part2 /dev/disk/by-label/nixos /dev/disk/by-diskseq/10-part2 /dev/disk/by-partlabel/primary /dev/disk/by-partuuid/f222513b-ded1-49fa-b591-20ce86a2fe7f /dev/disk/by-id/usb-USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0-part2 /dev/disk/by-path/pci-0000:06:00.3-usb-0:1:1.0-scsi-0:0:0:0-part2 /dev/disk/by-uuid/f222513b-ded1-49fa-b591-20ce86a2fe7f /dev/root Size:61274570240 UUID: Serial:USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem:ext4 Mountpoint:rootfs Vendor:USB Model:SanDisk_3.2Gen1 WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sdf2 KernelName:sdf2 Encrypted:false}
2024-04-23 16:13:24.866045 D | inventory: &{Name:zram0 Parent: HasChildren:false DevLinks: Size:67400368128 UUID:4e19381a-af44-442b-a593-8a629fc5a6b7 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint:[SWAP] Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/zram0 KernelName:zram0 Encrypted:false}
2024-04-23 16:13:24.866067 D | inventory: &{Name:dm-0 Parent: HasChildren:false DevLinks:/dev/disk/by-id/dm-uuid-LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp4w6WJF2qM4UEJi87KskymSyRgpJUIHXxm /dev/mapper/h5b_metadata0-h5b_metadata0_0 /dev/h5b_metadata0/h5b_metadata0_0 /dev/disk/by-id/dm-name-h5b_metadata0-h5b_metadata0_0 Size:858993459200 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/h5b_metadata0-h5b_metadata0_0 KernelName:dm-0 Encrypted:false}
2024-04-23 16:13:24.866079 D | inventory: &{Name:dm-1 Parent: HasChildren:false DevLinks:/dev/mapper/h5b_metadata0-h5b_metadata0_1 /dev/h5b_metadata0/h5b_metadata0_1 /dev/disk/by-id/dm-uuid-LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp40YetwLxJ1txmhzVQSPES2gVjkMBEoKEs /dev/disk/by-id/dm-name-h5b_metadata0-h5b_metadata0_1 Size:214748364800 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/h5b_metadata0-h5b_metadata0_1 KernelName:dm-1 Encrypted:false}
2024-04-23 16:13:24.866090 D | inventory: &{Name:dm-2 Parent: HasChildren:false DevLinks:/dev/mapper/h5b_metadata0-h5b_metadata0_2 /dev/disk/by-id/dm-name-h5b_metadata0-h5b_metadata0_2 /dev/disk/by-id/dm-uuid-LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp4YqvdOa3gukygO11a2cCBhVuy3Hq3hK1W /dev/h5b_metadata0/h5b_metadata0_2 Size:214748364800 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/h5b_metadata0-h5b_metadata0_2 KernelName:dm-2 Encrypted:false}
2024-04-23 16:13:24.866102 D | inventory: &{Name:dm-3 Parent: HasChildren:false DevLinks:/dev/disk/by-id/dm-uuid-LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp44LCSn4DQLdlDNv04uednUYjx0oXGnYJg /dev/mapper/h5b_metadata0-h5b_metadata0_3 /dev/h5b_metadata0/h5b_metadata0_3 /dev/disk/by-id/dm-name-h5b_metadata0-h5b_metadata0_3 Size:161061273600 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/h5b_metadata0-h5b_metadata0_3 KernelName:dm-3 Encrypted:false}
2024-04-23 16:13:24.866115 D | inventory: &{Name:nvme0n1p1 Parent:nvme0n1 HasChildren:false DevLinks:/dev/disk/by-id/nvme-UMIS_RPETJ512MGE2QDQ_SS0L25217X1RC18J24R2_1-part1 /dev/disk/by-uuid/39fa028d-e147-428f-9f9f-6fbf2af76871 /dev/disk/by-partuuid/1fa5d815-9b58-470e-8b8b-754dba77103b /dev/disk/by-path/pci-0000:24:00.0-nvme-1-part1 /dev/disk/by-id/nvme-UMIS_RPETJ512MGE2QDQ_SS0L25217X1RC18J24R2-part1 /dev/disk/by-diskseq/1-part1 /dev/disk/by-id/nvme-eui.044a500181401ad2-part1 Size:512108789760 UUID: Serial:UMIS_RPETJ512MGE2QDQ_SS0L25217X1RC18J24R2_1 Type:part Rotational:false Readonly:false Partitions:[] Filesystem:xfs Mountpoint:var Vendor: Model:UMIS RPETJ512MGE2QDQ WWN:eui.044a500181401ad2 WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nvme0n1p1 KernelName:nvme0n1p1 Encrypted:false}
2024-04-23 16:13:24.866132 D | inventory: &{Name:nvme1n1p1 Parent:nvme1n1 HasChildren:false DevLinks:/dev/disk/by-id/lvm-pv-uuid-rTkv5n-yCUf-llX4-ndbV-yotq-mcXT-5fy0FT /dev/disk/by-partuuid/9928e468-a288-446a-91f6-da27b610c394 /dev/disk/by-id/nvme-SPCC_M.2_PCIe_SSD_230363635160173-part1 /dev/disk/by-path/pci-0000:61:00.0-nvme-1-part1 /dev/disk/by-id/nvme-eui.32333033363336334ce0001835313630-part1 /dev/disk/by-id/nvme-SPCC_M.2_PCIe_SSD_230363635160173_1-part1 /dev/disk/by-diskseq/2-part1 Size:2000397795328 UUID: Serial:SPCC_M.2_PCIe_SSD_230363635160173_1 Type:part Rotational:false Readonly:false Partitions:[] Filesystem:LVM2_member Mountpoint: Vendor: Model:SPCC M.2 PCIe SSD WWN:eui.32333033363336334ce0001835313630 WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nvme1n1p1 KernelName:nvme1n1p1 Encrypted:false}
2024-04-23 16:13:24.866138 I | cephosd: creating and starting the osds
2024-04-23 16:13:24.866165 D | cephosd: desiredDevices are [{Name:/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF OSDsPerDevice:1 MetadataDevice:/dev/h5b_metadata0/h5b_metadata0_3 DatabaseSizeMB:0 DeviceClass: InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 OSDsPerDevice:1 MetadataDevice:/dev/h5b_metadata0/h5b_metadata0_2 DatabaseSizeMB:0 DeviceClass: InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN OSDsPerDevice:1 MetadataDevice:/dev/h5b_metadata0/h5b_metadata0_1 DatabaseSizeMB:0 DeviceClass: InitialWeight: IsFilter:false IsDevicePathFilter:false} {Name:/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG OSDsPerDevice:1 MetadataDevice:/dev/h5b_metadata0/h5b_metadata0_0 DatabaseSizeMB:0 DeviceClass: InitialWeight: IsFilter:false IsDevicePathFilter:false}]
2024-04-23 16:13:24.866171 D | cephosd: context.Devices are:
2024-04-23 16:13:24.866183 D | cephosd: &{Name:sda Parent: HasChildren:false DevLinks:/dev/disk/by-id/wwn-0x50014ee20fcbafe2 /dev/disk/by-diskseq/3 /dev/disk/by-path/pci-0000:63:00.2-ata-4.0 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 /dev/disk/by-path/pci-0000:63:00.2-ata-4 Size:4000787030016 UUID:d3083bb9-6fca-44d0-8c99-aced76d37de7 Serial:WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model:WDC_WD40EFRX-68N32N0 WWN:0x50014ee20fcbafe2 WWNVendorExtension:0x50014ee20fcbafe2 Empty:false CephVolumeData: RealPath:/dev/sda KernelName:sda Encrypted:false}
2024-04-23 16:13:24.866202 D | cephosd: &{Name:sdb Parent: HasChildren:false DevLinks:/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG /dev/disk/by-path/pci-0000:63:00.2-ata-2.0 /dev/disk/by-diskseq/4 /dev/disk/by-path/pci-0000:63:00.2-ata-2 /dev/disk/by-id/wwn-0x5000039af8cb5ed8 Size:16000900661248 UUID:ced9e595-b31d-4d5a-8984-6a9ed23d9f29 Serial:TOSHIBA_MG08ACA16TE_6180A1PCFVGG Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model:TOSHIBA_MG08ACA16TE WWN:0x5000039af8cb5ed8 WWNVendorExtension:0x5000039af8cb5ed8 Empty:false CephVolumeData: RealPath:/dev/sdb KernelName:sdb Encrypted:false}
2024-04-23 16:13:24.866218 D | cephosd: &{Name:sdc Parent: HasChildren:false DevLinks:/dev/disk/by-id/wwn-0x50014ee2122965ae /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN /dev/disk/by-path/pci-0000:63:00.2-ata-5.0 /dev/disk/by-diskseq/5 /dev/disk/by-path/pci-0000:63:00.2-ata-5 Size:4000787030016 UUID:f0f133a2-b42c-4cbd-9296-544de8033409 Serial:WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model:WDC_WD40EFRX-68N32N0 WWN:0x50014ee2122965ae WWNVendorExtension:0x50014ee2122965ae Empty:false CephVolumeData: RealPath:/dev/sdc KernelName:sdc Encrypted:false}
2024-04-23 16:13:24.866240 D | cephosd: &{Name:sdd Parent: HasChildren:false DevLinks:/dev/disk/by-path/pci-0000:63:00.2-ata-7.0 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1DL3AJY /dev/disk/by-path/pci-0000:63:00.2-ata-7 /dev/disk/by-diskseq/6 /dev/disk/by-id/wwn-0x50014ee2630d07fd Size:3000592982016 UUID:5bbc7959-5487-47f7-8bb8-35953aa96995 Serial:WDC_WD30EFRX-68EUZN0_WD-WCC4N1DL3AJY Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model:WDC_WD30EFRX-68EUZN0 WWN:0x50014ee2630d07fd WWNVendorExtension:0x50014ee2630d07fd Empty:false CephVolumeData: RealPath:/dev/sdd KernelName:sdd Encrypted:false}
2024-04-23 16:13:24.866254 D | cephosd: &{Name:sde Parent: HasChildren:false DevLinks:/dev/disk/by-path/pci-0000:63:00.2-ata-8.0 /dev/disk/by-diskseq/7 /dev/disk/by-path/pci-0000:63:00.2-ata-8 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF /dev/disk/by-id/wwn-0x50014ee20d320712 Size:3000592982016 UUID:705776b1-db0f-4d2e-9f42-ea6fa1ce4740 Serial:WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model:WDC_WD30EFRX-68EUZN0 WWN:0x50014ee20d320712 WWNVendorExtension:0x50014ee20d320712 Empty:false CephVolumeData: RealPath:/dev/sde KernelName:sde Encrypted:false}
2024-04-23 16:13:24.866268 D | cephosd: &{Name:sdf1 Parent:sdf HasChildren:false DevLinks:/dev/disk/by-id/usb-USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0-part1 /dev/disk/by-label/ESP /dev/disk/by-partuuid/1c06f03b-704e-4657-b9cd-681a087a2fdc /dev/disk/by-path/pci-0000:06:00.3-usbv3-0:1:1.0-scsi-0:0:0:0-part1 /dev/disk/by-diskseq/10-part1 /dev/disk/by-uuid/12CE-A600 /dev/disk/by-path/pci-0000:06:00.3-usb-0:1:1.0-scsi-0:0:0:0-part1 /dev/disk/by-partlabel/ESP Size:247463936 UUID: Serial:USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem:vfat Mountpoint:boot Vendor:USB Model:SanDisk_3.2Gen1 WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sdf1 KernelName:sdf1 Encrypted:false}
2024-04-23 16:13:24.866299 D | cephosd: &{Name:sdf2 Parent:sdf HasChildren:false DevLinks:/dev/disk/by-path/pci-0000:06:00.3-usbv3-0:1:1.0-scsi-0:0:0:0-part2 /dev/disk/by-label/nixos /dev/disk/by-diskseq/10-part2 /dev/disk/by-partlabel/primary /dev/disk/by-partuuid/f222513b-ded1-49fa-b591-20ce86a2fe7f /dev/disk/by-id/usb-USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0-part2 /dev/disk/by-path/pci-0000:06:00.3-usb-0:1:1.0-scsi-0:0:0:0-part2 /dev/disk/by-uuid/f222513b-ded1-49fa-b591-20ce86a2fe7f /dev/root Size:61274570240 UUID: Serial:USB_SanDisk_3.2Gen1_0501e6ce261ba99af3886f2ae416324036d1383be2568d6eba5eecb2cdee08bbc58b00000000000000000000784f1ef4ff92051083558107bf2b6c0a-0:0 Type:part Rotational:true Readonly:false Partitions:[] Filesystem:ext4 Mountpoint:rootfs Vendor:USB Model:SanDisk_3.2Gen1 WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/sdf2 KernelName:sdf2 Encrypted:false}
2024-04-23 16:13:24.866315 D | cephosd: &{Name:zram0 Parent: HasChildren:false DevLinks: Size:67400368128 UUID:4e19381a-af44-442b-a593-8a629fc5a6b7 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint:[SWAP] Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/zram0 KernelName:zram0 Encrypted:false}
2024-04-23 16:13:24.866328 D | cephosd: &{Name:dm-0 Parent: HasChildren:false DevLinks:/dev/disk/by-id/dm-uuid-LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp4w6WJF2qM4UEJi87KskymSyRgpJUIHXxm /dev/mapper/h5b_metadata0-h5b_metadata0_0 /dev/h5b_metadata0/h5b_metadata0_0 /dev/disk/by-id/dm-name-h5b_metadata0-h5b_metadata0_0 Size:858993459200 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/h5b_metadata0-h5b_metadata0_0 KernelName:dm-0 Encrypted:false}
2024-04-23 16:13:24.866352 D | cephosd: &{Name:dm-1 Parent: HasChildren:false DevLinks:/dev/mapper/h5b_metadata0-h5b_metadata0_1 /dev/h5b_metadata0/h5b_metadata0_1 /dev/disk/by-id/dm-uuid-LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp40YetwLxJ1txmhzVQSPES2gVjkMBEoKEs /dev/disk/by-id/dm-name-h5b_metadata0-h5b_metadata0_1 Size:214748364800 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/h5b_metadata0-h5b_metadata0_1 KernelName:dm-1 Encrypted:false}
2024-04-23 16:13:24.866364 D | cephosd: &{Name:dm-2 Parent: HasChildren:false DevLinks:/dev/mapper/h5b_metadata0-h5b_metadata0_2 /dev/disk/by-id/dm-name-h5b_metadata0-h5b_metadata0_2 /dev/disk/by-id/dm-uuid-LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp4YqvdOa3gukygO11a2cCBhVuy3Hq3hK1W /dev/h5b_metadata0/h5b_metadata0_2 Size:214748364800 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/h5b_metadata0-h5b_metadata0_2 KernelName:dm-2 Encrypted:false}
2024-04-23 16:13:24.866385 D | cephosd: &{Name:dm-3 Parent: HasChildren:false DevLinks:/dev/disk/by-id/dm-uuid-LVM-R6kx0wuAyf3blpqB6HLa25EX8eFmwzp44LCSn4DQLdlDNv04uednUYjx0oXGnYJg /dev/mapper/h5b_metadata0-h5b_metadata0_3 /dev/h5b_metadata0/h5b_metadata0_3 /dev/disk/by-id/dm-name-h5b_metadata0-h5b_metadata0_3 Size:161061273600 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/h5b_metadata0-h5b_metadata0_3 KernelName:dm-3 Encrypted:false}
2024-04-23 16:13:24.866397 D | cephosd: &{Name:nvme0n1p1 Parent:nvme0n1 HasChildren:false DevLinks:/dev/disk/by-id/nvme-UMIS_RPETJ512MGE2QDQ_SS0L25217X1RC18J24R2_1-part1 /dev/disk/by-uuid/39fa028d-e147-428f-9f9f-6fbf2af76871 /dev/disk/by-partuuid/1fa5d815-9b58-470e-8b8b-754dba77103b /dev/disk/by-path/pci-0000:24:00.0-nvme-1-part1 /dev/disk/by-id/nvme-UMIS_RPETJ512MGE2QDQ_SS0L25217X1RC18J24R2-part1 /dev/disk/by-diskseq/1-part1 /dev/disk/by-id/nvme-eui.044a500181401ad2-part1 Size:512108789760 UUID: Serial:UMIS_RPETJ512MGE2QDQ_SS0L25217X1RC18J24R2_1 Type:part Rotational:false Readonly:false Partitions:[] Filesystem:xfs Mountpoint:var Vendor: Model:UMIS RPETJ512MGE2QDQ WWN:eui.044a500181401ad2 WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nvme0n1p1 KernelName:nvme0n1p1 Encrypted:false}
2024-04-23 16:13:24.866414 D | cephosd: &{Name:nvme1n1p1 Parent:nvme1n1 HasChildren:false DevLinks:/dev/disk/by-id/lvm-pv-uuid-rTkv5n-yCUf-llX4-ndbV-yotq-mcXT-5fy0FT /dev/disk/by-partuuid/9928e468-a288-446a-91f6-da27b610c394 /dev/disk/by-id/nvme-SPCC_M.2_PCIe_SSD_230363635160173-part1 /dev/disk/by-path/pci-0000:61:00.0-nvme-1-part1 /dev/disk/by-id/nvme-eui.32333033363336334ce0001835313630-part1 /dev/disk/by-id/nvme-SPCC_M.2_PCIe_SSD_230363635160173_1-part1 /dev/disk/by-diskseq/2-part1 Size:2000397795328 UUID: Serial:SPCC_M.2_PCIe_SSD_230363635160173_1 Type:part Rotational:false Readonly:false Partitions:[] Filesystem:LVM2_member Mountpoint: Vendor: Model:SPCC M.2 PCIe SSD WWN:eui.32333033363336334ce0001835313630 WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nvme1n1p1 KernelName:nvme1n1p1 Encrypted:false}
2024-04-23 16:13:24.866422 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2024-04-23 16:13:24.866649 D | exec: Running command: lsblk /dev/sda --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:24.874263 D | sys: lsblk output: "SIZE=\"4000787030016\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sda\" KNAME=\"/dev/sda\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-04-23 16:13:24.874284 D | exec: Running command: ceph-volume inventory --format json /dev/sda
2024-04-23 16:13:25.390660 I | cephosd: device "sda" is available.
2024-04-23 16:13:25.390687 I | cephosd: "/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1" found in the desired devices (matched by link: "/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1")
2024-04-23 16:13:25.390693 I | cephosd: device "sda" is selected by the device filter/name "/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1"
2024-04-23 16:13:25.390700 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2024-04-23 16:13:25.390942 D | exec: Running command: lsblk /dev/sdb --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:25.398742 D | sys: lsblk output: "SIZE=\"16000900661248\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sdb\" KNAME=\"/dev/sdb\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-04-23 16:13:25.398760 D | exec: Running command: ceph-volume inventory --format json /dev/sdb
2024-04-23 16:13:25.907257 I | cephosd: device "sdb" is available.
2024-04-23 16:13:25.907287 I | cephosd: "/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG" found in the desired devices (matched by link: "/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG")
2024-04-23 16:13:25.907294 I | cephosd: device "sdb" is selected by the device filter/name "/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG"
2024-04-23 16:13:25.907301 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2024-04-23 16:13:26.492818 D | exec: Running command: lsblk /dev/sdc --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:26.500816 D | sys: lsblk output: "SIZE=\"4000787030016\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sdc\" KNAME=\"/dev/sdc\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-04-23 16:13:26.500838 D | exec: Running command: ceph-volume inventory --format json /dev/sdc
2024-04-23 16:13:27.015473 I | cephosd: device "sdc" is available.
2024-04-23 16:13:27.015502 I | cephosd: "/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN" found in the desired devices (matched by link: "/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN")
2024-04-23 16:13:27.015509 I | cephosd: device "sdc" is selected by the device filter/name "/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN"
2024-04-23 16:13:27.015518 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2024-04-23 16:13:27.015767 D | exec: Running command: lsblk /dev/sdd --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:27.023647 D | sys: lsblk output: "SIZE=\"3000592982016\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sdd\" KNAME=\"/dev/sdd\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-04-23 16:13:27.023664 D | exec: Running command: ceph-volume inventory --format json /dev/sdd
2024-04-23 16:13:27.502039 I | cephosd: device "sdd" is available.
2024-04-23 16:13:27.502082 I | cephosd: skipping device "sdd" that does not match the device filter/list ([{/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF 1 /dev/h5b_metadata0/h5b_metadata0_3 0    false false} {/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 1 /dev/h5b_metadata0/h5b_metadata0_2 0    false false} {/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN 1 /dev/h5b_metadata0/h5b_metadata0_1 0    false false} {/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG 1 /dev/h5b_metadata0/h5b_metadata0_0 0    false false}]).
2024-04-23 16:13:27.502088 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2024-04-23 16:13:27.502320 D | exec: Running command: lsblk /dev/sde --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:27.510349 D | sys: lsblk output: "SIZE=\"3000592982016\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sde\" KNAME=\"/dev/sde\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-04-23 16:13:27.510367 D | exec: Running command: ceph-volume inventory --format json /dev/sde
2024-04-23 16:13:28.001801 I | cephosd: device "sde" is available.
2024-04-23 16:13:28.001827 I | cephosd: "/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF" found in the desired devices (matched by link: "/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF")
2024-04-23 16:13:28.001834 I | cephosd: device "sde" is selected by the device filter/name "/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF"
2024-04-23 16:13:28.001842 I | cephosd: skipping device "sdf1" with mountpoint "boot"
2024-04-23 16:13:28.001847 I | cephosd: skipping device "sdf2" with mountpoint "rootfs"
2024-04-23 16:13:28.001851 I | cephosd: skipping device "zram0" with mountpoint "[SWAP]"
2024-04-23 16:13:28.001856 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2024-04-23 16:13:28.048269 D | exec: Running command: lsblk /dev/mapper/h5b_metadata0-h5b_metadata0_0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:28.055926 D | sys: lsblk output: "SIZE=\"858993459200\" ROTA=\"0\" RO=\"0\" TYPE=\"lvm\" PKNAME=\"\" NAME=\"/dev/mapper/h5b_metadata0-h5b_metadata0_0\" KNAME=\"/dev/dm-0\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-04-23 16:13:28.055943 D | exec: Running command: dmsetup info -c --noheadings -o name /dev/mapper/h5b_metadata0-h5b_metadata0_0
2024-04-23 16:13:28.060845 D | exec: Running command: dmsetup splitname --noheadings h5b_metadata0-h5b_metadata0_0
2024-04-23 16:13:28.065152 D | exec: Running command: ceph-volume lvm list --format json h5b_metadata0/h5b_metadata0_0
2024-04-23 16:13:28.455451 I | cephosd: device "dm-0" is available.
2024-04-23 16:13:28.455494 I | cephosd: skipping device "dm-0" that does not match the device filter/list ([{/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF 1 /dev/h5b_metadata0/h5b_metadata0_3 0    false false} {/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 1 /dev/h5b_metadata0/h5b_metadata0_2 0    false false} {/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN 1 /dev/h5b_metadata0/h5b_metadata0_1 0    false false} {/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG 1 /dev/h5b_metadata0/h5b_metadata0_0 0    false false}]).
2024-04-23 16:13:28.455500 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2024-04-23 16:13:28.467761 D | exec: Running command: lsblk /dev/mapper/h5b_metadata0-h5b_metadata0_1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:28.475820 D | sys: lsblk output: "SIZE=\"214748364800\" ROTA=\"0\" RO=\"0\" TYPE=\"lvm\" PKNAME=\"\" NAME=\"/dev/mapper/h5b_metadata0-h5b_metadata0_1\" KNAME=\"/dev/dm-1\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-04-23 16:13:28.475839 D | exec: Running command: dmsetup info -c --noheadings -o name /dev/mapper/h5b_metadata0-h5b_metadata0_1
2024-04-23 16:13:28.480639 D | exec: Running command: dmsetup splitname --noheadings h5b_metadata0-h5b_metadata0_1
2024-04-23 16:13:28.484844 D | exec: Running command: ceph-volume lvm list --format json h5b_metadata0/h5b_metadata0_1
2024-04-23 16:13:28.869905 I | cephosd: device "dm-1" is available.
2024-04-23 16:13:28.869956 I | cephosd: skipping device "dm-1" that does not match the device filter/list ([{/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF 1 /dev/h5b_metadata0/h5b_metadata0_3 0    false false} {/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 1 /dev/h5b_metadata0/h5b_metadata0_2 0    false false} {/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN 1 /dev/h5b_metadata0/h5b_metadata0_1 0    false false} {/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG 1 /dev/h5b_metadata0/h5b_metadata0_0 0    false false}]).
2024-04-23 16:13:28.869962 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2024-04-23 16:13:28.882257 D | exec: Running command: lsblk /dev/mapper/h5b_metadata0-h5b_metadata0_2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:28.890225 D | sys: lsblk output: "SIZE=\"214748364800\" ROTA=\"0\" RO=\"0\" TYPE=\"lvm\" PKNAME=\"\" NAME=\"/dev/mapper/h5b_metadata0-h5b_metadata0_2\" KNAME=\"/dev/dm-2\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-04-23 16:13:28.890243 D | exec: Running command: dmsetup info -c --noheadings -o name /dev/mapper/h5b_metadata0-h5b_metadata0_2
2024-04-23 16:13:28.894642 D | exec: Running command: dmsetup splitname --noheadings h5b_metadata0-h5b_metadata0_2
2024-04-23 16:13:28.898711 D | exec: Running command: ceph-volume lvm list --format json h5b_metadata0/h5b_metadata0_2
2024-04-23 16:13:29.279069 I | cephosd: device "dm-2" is available.
2024-04-23 16:13:29.279118 I | cephosd: skipping device "dm-2" that does not match the device filter/list ([{/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF 1 /dev/h5b_metadata0/h5b_metadata0_3 0    false false} {/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 1 /dev/h5b_metadata0/h5b_metadata0_2 0    false false} {/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN 1 /dev/h5b_metadata0/h5b_metadata0_1 0    false false} {/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG 1 /dev/h5b_metadata0/h5b_metadata0_0 0    false false}]).
2024-04-23 16:13:29.279126 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2024-04-23 16:13:29.291532 D | exec: Running command: lsblk /dev/mapper/h5b_metadata0-h5b_metadata0_3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-04-23 16:13:29.299464 D | sys: lsblk output: "SIZE=\"161061273600\" ROTA=\"0\" RO=\"0\" TYPE=\"lvm\" PKNAME=\"\" NAME=\"/dev/mapper/h5b_metadata0-h5b_metadata0_3\" KNAME=\"/dev/dm-3\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-04-23 16:13:29.299482 D | exec: Running command: dmsetup info -c --noheadings -o name /dev/mapper/h5b_metadata0-h5b_metadata0_3
2024-04-23 16:13:29.304710 D | exec: Running command: dmsetup splitname --noheadings h5b_metadata0-h5b_metadata0_3
2024-04-23 16:13:29.308941 D | exec: Running command: ceph-volume lvm list --format json h5b_metadata0/h5b_metadata0_3
2024-04-23 16:13:29.690545 I | cephosd: device "dm-3" is available.
2024-04-23 16:13:29.690593 I | cephosd: skipping device "dm-3" that does not match the device filter/list ([{/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF 1 /dev/h5b_metadata0/h5b_metadata0_3 0    false false} {/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 1 /dev/h5b_metadata0/h5b_metadata0_2 0    false false} {/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN 1 /dev/h5b_metadata0/h5b_metadata0_1 0    false false} {/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG 1 /dev/h5b_metadata0/h5b_metadata0_0 0    false false}]).
2024-04-23 16:13:29.690599 I | cephosd: skipping device "nvme0n1p1" with mountpoint "var"
2024-04-23 16:13:29.690604 I | cephosd: skipping device "nvme1n1p1" because it contains a filesystem "LVM2_member"
2024-04-23 16:13:29.695209 I | cephosd: configuring osd devices: {"Entries":{"sda":{"Data":-1,"Metadata":null,"Config":{"Name":"/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1","OSDsPerDevice":1,"MetadataDevice":"/dev/h5b_metadata0/h5b_metadata0_2","DatabaseSizeMB":0,"DeviceClass":"hdd","InitialWeight":"","IsFilter":false,"IsDevicePathFilter":false},"PersistentDevicePaths":["/dev/disk/by-id/wwn-0x50014ee20fcbafe2","/dev/disk/by-diskseq/3","/dev/disk/by-path/pci-0000:63:00.2-ata-4.0","/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1","/dev/disk/by-path/pci-0000:63:00.2-ata-4"],"DeviceInfo":{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/wwn-0x50014ee20fcbafe2 /dev/disk/by-diskseq/3 /dev/disk/by-path/pci-0000:63:00.2-ata-4.0 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1 
/dev/disk/by-path/pci-0000:63:00.2-ata-4","size":4000787030016,"uuid":"d3083bb9-6fca-44d0-8c99-aced76d37de7","serial":"WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1","type":"disk","rotational":true,"readOnly":false,"Partitions":null,"filesystem":"","mountpoint":"","vendor":"","model":"WDC_WD40EFRX-68N32N0","wwn":"0x50014ee20fcbafe2","wwnVendorExtension":"0x50014ee20fcbafe2","empty":false,"real-path":"/dev/sda","kernel-name":"sda"},"RestoreOSD":false},"sdb":{"Data":-1,"Metadata":null,"Config":{"Name":"/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG","OSDsPerDevice":1,"MetadataDevice":"/dev/h5b_metadata0/h5b_metadata0_0","DatabaseSizeMB":0,"DeviceClass":"hdd","InitialWeight":"","IsFilter":false,"IsDevicePathFilter":false},"PersistentDevicePaths":["/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG","/dev/disk/by-path/pci-0000:63:00.2-ata-2.0","/dev/disk/by-diskseq/4","/dev/disk/by-path/pci-0000:63:00.2-ata-2","/dev/disk/by-id/wwn-0x5000039af8cb5ed8"],"DeviceInfo":{"name":"sdb","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG /dev/disk/by-path/pci-0000:63:00.2-ata-2.0 /dev/disk/by-diskseq/4 /dev/disk/by-path/pci-0000:63:00.2-ata-2 
/dev/disk/by-id/wwn-0x5000039af8cb5ed8","size":16000900661248,"uuid":"ced9e595-b31d-4d5a-8984-6a9ed23d9f29","serial":"TOSHIBA_MG08ACA16TE_6180A1PCFVGG","type":"disk","rotational":true,"readOnly":false,"Partitions":null,"filesystem":"","mountpoint":"","vendor":"","model":"TOSHIBA_MG08ACA16TE","wwn":"0x5000039af8cb5ed8","wwnVendorExtension":"0x5000039af8cb5ed8","empty":false,"real-path":"/dev/sdb","kernel-name":"sdb"},"RestoreOSD":false},"sdc":{"Data":-1,"Metadata":null,"Config":{"Name":"/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN","OSDsPerDevice":1,"MetadataDevice":"/dev/h5b_metadata0/h5b_metadata0_1","DatabaseSizeMB":0,"DeviceClass":"hdd","InitialWeight":"","IsFilter":false,"IsDevicePathFilter":false},"PersistentDevicePaths":["/dev/disk/by-id/wwn-0x50014ee2122965ae","/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN","/dev/disk/by-path/pci-0000:63:00.2-ata-5.0","/dev/disk/by-diskseq/5","/dev/disk/by-path/pci-0000:63:00.2-ata-5"],"DeviceInfo":{"name":"sdc","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/wwn-0x50014ee2122965ae /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN /dev/disk/by-path/pci-0000:63:00.2-ata-5.0 /dev/disk/by-diskseq/5 
/dev/disk/by-path/pci-0000:63:00.2-ata-5","size":4000787030016,"uuid":"f0f133a2-b42c-4cbd-9296-544de8033409","serial":"WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN","type":"disk","rotational":true,"readOnly":false,"Partitions":null,"filesystem":"","mountpoint":"","vendor":"","model":"WDC_WD40EFRX-68N32N0","wwn":"0x50014ee2122965ae","wwnVendorExtension":"0x50014ee2122965ae","empty":false,"real-path":"/dev/sdc","kernel-name":"sdc"},"RestoreOSD":false},"sde":{"Data":-1,"Metadata":null,"Config":{"Name":"/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF","OSDsPerDevice":1,"MetadataDevice":"/dev/h5b_metadata0/h5b_metadata0_3","DatabaseSizeMB":0,"DeviceClass":"hdd","InitialWeight":"","IsFilter":false,"IsDevicePathFilter":false},"PersistentDevicePaths":["/dev/disk/by-path/pci-0000:63:00.2-ata-8.0","/dev/disk/by-diskseq/7","/dev/disk/by-path/pci-0000:63:00.2-ata-8","/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF","/dev/disk/by-id/wwn-0x50014ee20d320712"],"DeviceInfo":{"name":"sde","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:63:00.2-ata-8.0 /dev/disk/by-diskseq/7 /dev/disk/by-path/pci-0000:63:00.2-ata-8 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF /dev/disk/by-id/wwn-0x50014ee20d320712","size":3000592982016,"uuid":"705776b1-db0f-4d2e-9f42-ea6fa1ce4740","serial":"WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF","type":"disk","rotational":true,"readOnly":false,"Partitions":null,"filesystem":"","mountpoint":"","vendor":"","model":"WDC_WD30EFRX-68EUZN0","wwn":"0x50014ee20d320712","wwnVendorExtension":"0x50014ee20d320712","empty":false,"real-path":"/dev/sde","kernel-name":"sde"},"RestoreOSD":false}}} 2024-04-23 16:13:29.695271 I | cephclient: getting or creating ceph auth key "client.bootstrap-osd" 2024-04-23 16:13:29.695285 D | exec: Running command: ceph auth get-or-create-key client.bootstrap-osd mon allow profile bootstrap-osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin 
--keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json 2024-04-23 16:13:30.258480 D | cephosd: won't use raw mode for disk "/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1" since this disk has a metadata device 2024-04-23 16:13:30.258498 D | cephosd: won't use raw mode for disk "/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG" since this disk has a metadata device 2024-04-23 16:13:30.258503 D | cephosd: won't use raw mode for disk "/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN" since this disk has a metadata device 2024-04-23 16:13:30.258508 D | cephosd: won't use raw mode for disk "/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF" since this disk has a metadata device 2024-04-23 16:13:30.258639 I | cephosd: configuring new LVM device sda 2024-04-23 16:13:30.258661 I | cephosd: "/dev/h5b_metadata0/h5b_metadata0_2" found in the desired devices (matched by link: "/dev/h5b_metadata0/h5b_metadata0_2") 2024-04-23 16:13:30.258667 I | cephosd: using /dev/h5b_metadata0/h5b_metadata0_2 as metadataDevice for device /dev/sda and let ceph-volume lvm batch decide how to create volumes 2024-04-23 16:13:30.258672 I | cephosd: configuring new LVM device sdb 2024-04-23 16:13:30.258681 I | cephosd: "/dev/h5b_metadata0/h5b_metadata0_0" found in the desired devices (matched by link: "/dev/h5b_metadata0/h5b_metadata0_0") 2024-04-23 16:13:30.258687 I | cephosd: using /dev/h5b_metadata0/h5b_metadata0_0 as metadataDevice for device /dev/sdb and let ceph-volume lvm batch decide how to create volumes 2024-04-23 16:13:30.258691 I | cephosd: configuring new LVM device sdc 2024-04-23 16:13:30.258702 I | cephosd: "/dev/h5b_metadata0/h5b_metadata0_1" found in the desired devices (matched by link: "/dev/h5b_metadata0/h5b_metadata0_1") 2024-04-23 16:13:30.258708 I | cephosd: using /dev/h5b_metadata0/h5b_metadata0_1 as metadataDevice for device /dev/sdc and let ceph-volume lvm batch decide how to create volumes 2024-04-23 16:13:30.258713 I | cephosd: 
configuring new LVM device sde 2024-04-23 16:13:30.258723 I | cephosd: "/dev/h5b_metadata0/h5b_metadata0_3" found in the desired devices (matched by link: "/dev/h5b_metadata0/h5b_metadata0_3") 2024-04-23 16:13:30.258728 I | cephosd: using /dev/h5b_metadata0/h5b_metadata0_3 as metadataDevice for device /dev/sde and let ceph-volume lvm batch decide how to create volumes 2024-04-23 16:13:30.258741 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/sde --db-devices /dev/h5b_metadata0/h5b_metadata0_3 --crush-device-class hdd --report 2024-04-23 16:13:30.836886 D | exec: --> passed data devices: 1 physical, 0 LVM 2024-04-23 16:13:30.836924 D | exec: --> relative data size: 1.0 2024-04-23 16:13:30.836928 D | exec: --> passed block_db devices: 0 physical, 1 LVM 2024-04-23 16:13:30.838218 D | exec: Traceback (most recent call last): 2024-04-23 16:13:30.838228 D | exec: File "/usr/sbin/ceph-volume", line 11, in 2024-04-23 16:13:30.838232 D | exec: load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')() 2024-04-23 16:13:30.838235 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in __init__ 2024-04-23 16:13:30.838238 D | exec: self.main(self.argv) 2024-04-23 16:13:30.838241 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc 2024-04-23 16:13:30.838244 D | exec: return f(*a, **kw) 2024-04-23 16:13:30.838247 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main 2024-04-23 16:13:30.838250 D | exec: terminal.dispatch(self.mapper, subcommand_args) 2024-04-23 16:13:30.838253 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch 2024-04-23 16:13:30.838259 D | exec: instance.main() 2024-04-23 16:13:30.838262 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main 2024-04-23 
16:13:30.838265 D | exec: terminal.dispatch(self.mapper, self.argv) 2024-04-23 16:13:30.838268 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch 2024-04-23 16:13:30.838271 D | exec: instance.main() 2024-04-23 16:13:30.838275 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root 2024-04-23 16:13:30.838280 D | exec: return func(*a, **kw) 2024-04-23 16:13:30.838283 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 401, in main 2024-04-23 16:13:30.838287 D | exec: plan = self.get_plan(self.args) 2024-04-23 16:13:30.838291 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 438, in get_plan 2024-04-23 16:13:30.838294 D | exec: args.wal_devices) 2024-04-23 16:13:30.838300 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 469, in get_deployment_layout 2024-04-23 16:13:30.838304 D | exec: fast_type) 2024-04-23 16:13:30.838307 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 506, in fast_allocations 2024-04-23 16:13:30.838311 D | exec: ret.extend(get_lvm_fast_allocs(lvm_devs)) 2024-04-23 16:13:30.838314 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 172, in get_lvm_fast_allocs 2024-04-23 16:13:30.838318 D | exec: disk.Size(b=int(d.lvs[0].lv_size)), 1) for d in lvs if not 2024-04-23 16:13:30.838322 D | exec: File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 173, in 2024-04-23 16:13:30.838335 D | exec: d.journal_used_by_ceph] 2024-04-23 16:13:30.838338 D | exec: IndexError: list index out of range 2024-04-23 16:13:30.876135 C | rookcmd: failed to configure devices: failed to initialize osd: failed ceph-volume report: exit status 1 ```
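For reference, the IndexError at the end of the report comes from `get_lvm_fast_allocs` in `ceph_volume/devices/lvm/batch.py`, which indexes `d.lvs[0]` for every candidate db device. A minimal sketch (the `Fake*` classes and the tuple shape are illustrative, not ceph-volume's real API) of why a db LV whose metadata ceph-volume failed to enumerate crashes the whole batch report:

```python
# Illustrative stand-ins for ceph-volume's device/volume objects (hypothetical
# names; only the attributes touched by the failing comprehension are modeled).
class FakeLV:
    def __init__(self, lv_size):
        self.lv_size = lv_size

class FakeDevice:
    def __init__(self, path, lvs):
        self.abspath = path
        self.lvs = lvs                    # empty if the LV lookup returned nothing
        self.journal_used_by_ceph = False

def get_lvm_fast_allocs(lvs):
    # Simplified form of the comprehension shown in the traceback:
    #   ... disk.Size(b=int(d.lvs[0].lv_size)), 1) for d in lvs if not
    #   d.journal_used_by_ceph]
    # d.lvs[0] is evaluated unconditionally, so an empty d.lvs raises IndexError
    # before any filtering can skip the device.
    return [(d.abspath, 0.0, int(d.lvs[0].lv_size), 1)
            for d in lvs if not d.journal_used_by_ceph]

good = FakeDevice("/dev/h5b_metadata0/h5b_metadata0_0", [FakeLV(214748364800)])
empty = FakeDevice("/dev/h5b_metadata0/h5b_metadata0_3", [])  # no LV metadata found

print(get_lvm_fast_allocs([good]))        # one allocation tuple
try:
    get_lvm_fast_allocs([good, empty])
except IndexError as e:
    print("IndexError:", e)               # same crash as in the log above
```

So the crash itself is only a symptom: on this node, `ceph-volume lvm list` apparently returns no LV entry for the passed `--db-devices` LV, and the batch planner then dies instead of reporting which device it could not resolve.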
JustinLex commented 1 week ago

I tried again with partitions on the metadata device instead of LVs, and now OSD provisioning fails with `vgcreate: No such file or directory`.
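A likely explanation (to be confirmed): in the log below, ceph-volume runs `vgcreate` in the host's mount namespace via nsenter, and the call fails with exit status 127, which is the shell/exec status for "command not found". A small sketch; the nsenter check at the end is hypothetical and assumes NixOS's usual `/run/current-system/sw/bin` layout:

```shell
# nsenter exec()s the bare name "vgcreate", so the binary must resolve on the
# PATH of the host mount namespace. Exit status 127 means exec could not find
# the command; demonstrate that convention with a deliberately missing command:
sh -c 'definitely-not-a-real-command-xyz' 2>/dev/null
echo "exit=$?"    # exit=127

# On NixOS, LVM tools typically live under /run/current-system/sw/bin rather
# than /usr/sbin, so a (hypothetical) check from the OSD prepare pod would be:
#   nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c 'command -v vgcreate'
```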

provision 2024-04-24 01:22:32.585222 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm prepare --bluestore --data /dev/sde --block.db /dev/nvme1n1p3 --crush-device-class hdd
provision 2024-04-24 01:22:34.182430 D | exec: --> Incompatible flags were found, some values may get ignored
provision 2024-04-24 01:22:34.182470 D | exec: --> Cannot use None (None) with --bluestore (bluestore)
provision 2024-04-24 01:22:34.182473 D | exec: --> Incompatible flags were found, some values may get ignored
provision 2024-04-24 01:22:34.182478 D | exec: --> Cannot use --bluestore (bluestore) with --block.db (bluestore)
provision 2024-04-24 01:22:34.182481 D | exec: Running command: /usr/bin/ceph-authtool --gen-print-key
provision 2024-04-24 01:22:34.182501 D | exec: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 722b0f82-7e03-4795-8767-245fc209bc07
provision 2024-04-24 01:22:34.182506 D | exec: Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts vgcreate --force --yes ceph-365d0ebd-44b9-46a2-8aaa-ad6cef719338 /dev/sde
provision 2024-04-24 01:22:34.182509 D | exec:  stderr: nsenter: failed to execute vgcreate: No such file or directory
provision 2024-04-24 01:22:34.182512 D | exec: --> Was unable to complete a new OSD, will rollback changes
provision 2024-04-24 01:22:34.182516 D | exec: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
provision 2024-04-24 01:22:34.182518 D | exec:  stderr: purged osd.0
provision 2024-04-24 01:22:34.184380 D | exec: Traceback (most recent call last):
provision 2024-04-24 01:22:34.184393 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 196, in safe_prepare
provision 2024-04-24 01:22:34.184397 D | exec:     self.prepare()
provision 2024-04-24 01:22:34.184401 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
provision 2024-04-24 01:22:34.184405 D | exec:     return func(*a, **kw)
provision 2024-04-24 01:22:34.184408 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 254, in prepare
provision 2024-04-24 01:22:34.184412 D | exec:     block_lv = self.prepare_data_device('block', osd_fsid)
provision 2024-04-24 01:22:34.184415 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 165, in prepare_data_device
provision 2024-04-24 01:22:34.184418 D | exec:     **kwargs)
provision 2024-04-24 01:22:34.184423 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 979, in create_lv
provision 2024-04-24 01:22:34.184427 D | exec:     vg = create_vg(device, name_prefix='ceph')
provision 2024-04-24 01:22:34.184431 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 664, in create_vg
provision 2024-04-24 01:22:34.184435 D | exec:     run_on_host=True
provision 2024-04-24 01:22:34.184439 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 147, in run
provision 2024-04-24 01:22:34.184443 D | exec:     raise RuntimeError(msg)
provision 2024-04-24 01:22:34.184448 D | exec: RuntimeError: command returned non-zero exit status: 127
provision 2024-04-24 01:22:34.184452 D | exec:
provision 2024-04-24 01:22:34.184456 D | exec: During handling of the above exception, another exception occurred:
provision 2024-04-24 01:22:34.184460 D | exec:
provision 2024-04-24 01:22:34.184465 D | exec: Traceback (most recent call last):
provision 2024-04-24 01:22:34.184469 D | exec:   File "/usr/sbin/ceph-volume", line 11, in <module>
provision 2024-04-24 01:22:34.184473 D | exec:     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
provision 2024-04-24 01:22:34.184478 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in __init__
provision 2024-04-24 01:22:34.184482 D | exec:     self.main(self.argv)
provision 2024-04-24 01:22:34.184486 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
provision 2024-04-24 01:22:34.184491 D | exec:     return f(*a, **kw)
provision 2024-04-24 01:22:34.184495 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main
provision 2024-04-24 01:22:34.184499 D | exec:     terminal.dispatch(self.mapper, subcommand_args)
provision 2024-04-24 01:22:34.184504 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
provision 2024-04-24 01:22:34.184508 D | exec:     instance.main()
provision 2024-04-24 01:22:34.184513 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
provision 2024-04-24 01:22:34.184517 D | exec:     terminal.dispatch(self.mapper, self.argv)
provision 2024-04-24 01:22:34.184522 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
provision 2024-04-24 01:22:34.184526 D | exec:     instance.main()
provision 2024-04-24 01:22:34.184530 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 327, in main
provision 2024-04-24 01:22:34.184535 D | exec:     self.safe_prepare()
provision 2024-04-24 01:22:34.184539 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 200, in safe_prepare
provision 2024-04-24 01:22:34.184545 D | exec:     rollback_osd(self.args, self.osd_id)
provision 2024-04-24 01:22:34.184550 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/common.py", line 35, in rollback_osd
provision 2024-04-24 01:22:34.184555 D | exec:     Zap(['--destroy', '--osd-id', osd_id]).main()
provision 2024-04-24 01:22:34.184559 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 403, in main
provision 2024-04-24 01:22:34.184562 D | exec:     self.zap_osd()
provision 2024-04-24 01:22:34.184565 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
provision 2024-04-24 01:22:34.184569 D | exec:     return func(*a, **kw)
provision 2024-04-24 01:22:34.184575 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 301, in zap_osd
│ provision 2024-04-24 01:22:34.184579 D | exec:     devices = find_associated_devices(self.args.osd_id, self.args.osd_fsid)                                                                                                                                                                                                                                               │
│ provision 2024-04-24 01:22:34.184584 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 88, in find_associated_devices                                                                                                                                                                                                             │
│ provision 2024-04-24 01:22:34.184589 D | exec:     '%s' % osd_id or osd_fsid)                                                                                                                                                                                                                                                                                            │
│ provision 2024-04-24 01:22:34.184606 D | exec: RuntimeError: Unable to find any LV for zapping OSD: 0                                                                                                                                                                                                                                                                    │
│ provision 2024-04-24 01:22:34.217814 C | rookcmd: failed to configure devices: failed to initialize osd: failed ceph-volume: exit status 1                                                                                                                                                                                                                               │
│ Stream closed EOF for rook-ceph/rook-ceph-osd-prepare-latios-fzccg (provision)                                                                      

It seems like Rook is invoking the LVM tools from the host's root filesystem mount when provisioning the drives, and the paths it expects are incompatible with NixOS.

I did follow the prerequisites for NixOS in the docs, but it seems like there are some additional steps needed to make Rook's OSD provisioning work on NixOS. Any advice here?

[jlh@latios:~]$ which vgcreate
/run/current-system/sw/bin/vgcreate

If there's a specific directory where the LVM binaries need to be, I can add symlinks for them with host mounts or Nix configs.
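For what it's worth, here is a quick host-side check that illustrates the mismatch: none of the directories the Ceph image searches by default carry the LVM tools on a stock NixOS node. The directory list and `vgcreate` are just examples for illustration.

```shell
# Check whether vgcreate exists in any of the conventional FHS directories
# that a typical container image's default PATH covers.
for d in /usr/local/sbin /usr/local/bin /usr/sbin /sbin; do
  if [ -x "$d/vgcreate" ]; then
    echo "found:   $d/vgcreate"
  else
    echo "missing: $d/vgcreate"
  fi
done
```

On NixOS every line comes back `missing`, since the real binaries live under `/run/current-system/sw/bin`.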

JustinLex commented 1 week ago

I seem to have solved my issue now, by adding an env override for the PATH environment variable so that Ceph picks up the NixOS binaries.

apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-osd-env-override
data:
  # Default ceph image PATH:
  # /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

  # Default NixOS PATH (user-specific dirs omitted)
  # /run/wrappers/bin:/nix/profile/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin

  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/run/wrappers/bin:/nix/profile/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin"

This now successfully provisions all of my OSDs, with the original LVM configuration I mentioned in my initial post.
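For anyone hitting the same thing: the override appends the NixOS system directories after the image's defaults rather than prepending them, which is why it is low-risk. A minimal sketch of the resulting lookup order (paths copied from the ConfigMap above):

```shell
# Merge the Ceph image's default PATH with the NixOS system directories,
# exactly as the ConfigMap above does.
CEPH_IMAGE_PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
NIXOS_PATH="/run/wrappers/bin:/nix/profile/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin"
MERGED="$CEPH_IMAGE_PATH:$NIXOS_PATH"
# PATH is searched left to right, so binaries shipped in the image still win
# on name collisions; the NixOS directories only fill in what the image
# lacks, such as the LVM tools in /run/current-system/sw/bin.
echo "$MERGED"
```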

Are there any improvements we can make to handle this interaction with NixOS better, improve the error messages, or document the mitigation in the NixOS prerequisites section of the Rook docs?

I'll go ahead and update the title; feel free to change this issue into a feature request.

I'm happy to open a PR for documentation changes. Let me know if this configmap workaround is production-ready or if there's a better workaround available.

travisn commented 1 week ago

Great to hear it is working now. Sounds good to update the docs if you want to open a PR. I would imagine the Prerequisites page would be a good place to add this info, in the existing NixOS section.