JustinLex opened this issue 1 week ago
Tried to do it with partitions on the metadata device instead of LVs, and now I'm getting an error saying `vgcreate: No such file or directory`.
```
provision 2024-04-24 01:22:32.585222 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm prepare --bluestore --data /dev/sde --block.db /dev/nvme1n1p3 --crush-device-class hdd
provision 2024-04-24 01:22:34.182430 D | exec: --> Incompatible flags were found, some values may get ignored
provision 2024-04-24 01:22:34.182470 D | exec: --> Cannot use None (None) with --bluestore (bluestore)
provision 2024-04-24 01:22:34.182473 D | exec: --> Incompatible flags were found, some values may get ignored
provision 2024-04-24 01:22:34.182478 D | exec: --> Cannot use --bluestore (bluestore) with --block.db (bluestore)
provision 2024-04-24 01:22:34.182481 D | exec: Running command: /usr/bin/ceph-authtool --gen-print-key
provision 2024-04-24 01:22:34.182501 D | exec: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 722b0f82-7e03-4795-8767-245fc209bc07
provision 2024-04-24 01:22:34.182506 D | exec: Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts vgcreate --force --yes ceph-365d0ebd-44b9-46a2-8aaa-ad6cef719338 /dev/sde
provision 2024-04-24 01:22:34.182509 D | exec: stderr: nsenter: failed to execute vgcreate: No such file or directory
provision 2024-04-24 01:22:34.182512 D | exec: --> Was unable to complete a new OSD, will rollback changes
provision 2024-04-24 01:22:34.182516 D | exec: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
provision 2024-04-24 01:22:34.182518 D | exec: stderr: purged osd.0
provision 2024-04-24 01:22:34.184380 D | exec: Traceback (most recent call last):
provision 2024-04-24 01:22:34.184393 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 196, in safe_prepare
provision 2024-04-24 01:22:34.184397 D | exec:     self.prepare()
provision 2024-04-24 01:22:34.184401 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
provision 2024-04-24 01:22:34.184405 D | exec:     return func(*a, **kw)
provision 2024-04-24 01:22:34.184408 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 254, in prepare
provision 2024-04-24 01:22:34.184412 D | exec:     block_lv = self.prepare_data_device('block', osd_fsid)
provision 2024-04-24 01:22:34.184415 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 165, in prepare_data_device
provision 2024-04-24 01:22:34.184418 D | exec:     **kwargs)
provision 2024-04-24 01:22:34.184423 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 979, in create_lv
provision 2024-04-24 01:22:34.184427 D | exec:     vg = create_vg(device, name_prefix='ceph')
provision 2024-04-24 01:22:34.184431 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 664, in create_vg
provision 2024-04-24 01:22:34.184435 D | exec:     run_on_host=True
provision 2024-04-24 01:22:34.184439 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 147, in run
provision 2024-04-24 01:22:34.184443 D | exec:     raise RuntimeError(msg)
provision 2024-04-24 01:22:34.184448 D | exec: RuntimeError: command returned non-zero exit status: 127
provision 2024-04-24 01:22:34.184452 D | exec:
provision 2024-04-24 01:22:34.184456 D | exec: During handling of the above exception, another exception occurred:
provision 2024-04-24 01:22:34.184460 D | exec:
provision 2024-04-24 01:22:34.184465 D | exec: Traceback (most recent call last):
provision 2024-04-24 01:22:34.184469 D | exec:   File "/usr/sbin/ceph-volume", line 11, in <module>
provision 2024-04-24 01:22:34.184473 D | exec:     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
provision 2024-04-24 01:22:34.184478 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in __init__
provision 2024-04-24 01:22:34.184482 D | exec:     self.main(self.argv)
provision 2024-04-24 01:22:34.184486 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
provision 2024-04-24 01:22:34.184491 D | exec:     return f(*a, **kw)
provision 2024-04-24 01:22:34.184495 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main
provision 2024-04-24 01:22:34.184499 D | exec:     terminal.dispatch(self.mapper, subcommand_args)
provision 2024-04-24 01:22:34.184504 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
provision 2024-04-24 01:22:34.184508 D | exec:     instance.main()
provision 2024-04-24 01:22:34.184513 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
provision 2024-04-24 01:22:34.184517 D | exec:     terminal.dispatch(self.mapper, self.argv)
provision 2024-04-24 01:22:34.184522 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
provision 2024-04-24 01:22:34.184526 D | exec:     instance.main()
provision 2024-04-24 01:22:34.184530 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 327, in main
provision 2024-04-24 01:22:34.184535 D | exec:     self.safe_prepare()
provision 2024-04-24 01:22:34.184539 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 200, in safe_prepare
provision 2024-04-24 01:22:34.184545 D | exec:     rollback_osd(self.args, self.osd_id)
provision 2024-04-24 01:22:34.184550 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/common.py", line 35, in rollback_osd
provision 2024-04-24 01:22:34.184555 D | exec:     Zap(['--destroy', '--osd-id', osd_id]).main()
provision 2024-04-24 01:22:34.184559 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 403, in main
provision 2024-04-24 01:22:34.184562 D | exec:     self.zap_osd()
provision 2024-04-24 01:22:34.184565 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
provision 2024-04-24 01:22:34.184569 D | exec:     return func(*a, **kw)
provision 2024-04-24 01:22:34.184575 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 301, in zap_osd
provision 2024-04-24 01:22:34.184579 D | exec:     devices = find_associated_devices(self.args.osd_id, self.args.osd_fsid)
provision 2024-04-24 01:22:34.184584 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 88, in find_associated_devices
provision 2024-04-24 01:22:34.184589 D | exec:     '%s' % osd_id or osd_fsid)
provision 2024-04-24 01:22:34.184606 D | exec: RuntimeError: Unable to find any LV for zapping OSD: 0
provision 2024-04-24 01:22:34.217814 C | rookcmd: failed to configure devices: failed to initialize osd: failed ceph-volume: exit status 1
Stream closed EOF for rook-ceph/rook-ceph-osd-prepare-latios-fzccg (provision)
```
It seems like Rook is using the LVM tools from the host's root filesystem (via the `nsenter` call in the log above) to provision the drives, and this is incompatible with NixOS, where those binaries don't live in the usual FHS locations.
I did follow the prerequisites for NixOS in the docs, but it seems like some additional steps are needed to make Rook's OSD provisioning work on NixOS. Any advice here?
```
[jlh@latios:~]$ which vgcreate
/run/current-system/sw/bin/vgcreate
```
If there's a specific directory where the LVM binaries need to be, I can add symlinks for them with host mounts or Nix configs.
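To illustrate what I think is going on (this is my reading of the log, not a verified trace of ceph-volume's internals): the `nsenter` call joins the host's mount namespace but keeps the container's `PATH`, so the `vgcreate` lookup only searches FHS directories that don't exist on a NixOS host. A minimal sketch reproducing the lookup failure on the host:

```
# Search only the container-default PATH for vgcreate, the way the
# nsenter'd process would. None of these directories exist on NixOS,
# so the lookup fails even though LVM is installed.
env PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin vgcreate --version
# env: 'vgcreate': No such file or directory
```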
I seem to have solved my issue now by adding an override for the `PATH` environment variable, so that ceph-volume picks up the NixOS binaries:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-osd-env-override
data:
  # Default ceph image PATH:
  #   /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  # Default NixOS PATH (user-specific dirs omitted):
  #   /run/wrappers/bin:/nix/profile/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/run/wrappers/bin:/nix/profile/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin"
```
This now successfully provisions all of my OSDs, with the original LVM configuration I mentioned in my initial post.
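For anyone verifying the same fix: with the toolbox pod deployed, something along these lines should show the new OSDs up and in:

```
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree
```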
Are there any improvements we can make to handle this interaction with NixOS better, improve the error messages, or document the mitigation in the NixOS prerequisites section in the Rook docs?
I'll go ahead and update the title; feel free to change this issue into a feature request.
I'm happy to open a PR for documentation changes. Let me know if this configmap workaround is production-ready or if there's a better workaround available.
Great to hear it is working now. Sounds good to update the docs if you want to open a PR. I would imagine the Prerequisites page would be good to add this info in the existing NixOS section.
Is this a bug report or feature request?
Deviation from expected behavior: ceph-volume crashes with an IndexError during the OSD initialization job when trying to create a disk OSD with an LVM db device.
My data devices are raw HDDs with no GPT/MBR, and I'm using a single NVMe metadata device, partitioned with GPT and holding a single LVM PV. The metadata VG has one LV for each OSD I'm trying to add, for a total of 4 LVs.
I'm partitioning the metadata device this way so that I can add/remove OSDs associated with it in the future without destroying the whole pool.
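For context, the metadata layout described above was created along these lines (the partition path and LV sizes here are placeholders, not my exact values; the VG/LV names match the CephCluster below):

```
# One GPT partition on the metadata NVMe as a PV, one VG, one LV per OSD
pvcreate /dev/nvme1n1p1
vgcreate h5b_metadata0 /dev/nvme1n1p1
for i in 0 1 2 3; do
  lvcreate --name "h5b_metadata0_$i" --size 100G h5b_metadata0
done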
I referenced the devices manually in the CephCluster, as shown below. I tried referencing the metadata LVs both as `/dev/h5b_metadata0/h5b_metadata0_3` and as `/dev/mapper/h5b_metadata0-h5b_metadata0_3`, but neither worked. I have included my full CephCluster and provision logs below. From the error message, it seems like ceph-volume can't find the LVs associated with a device?
How to reproduce it (minimal and precise):
Right now I'm just testing things before moving to a prod environment, so this is a new cluster with a single storage node. Everything else creates fine, but the storage node fails to provision the OSDs.
File(s) to submit:
CephCluster CR
```
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # The container image used to launch the Ceph daemon pods (mon, mgr, osd, mds, rgw).
    image: quay.io/ceph/ceph:v18.2.2
    # Whether to allow unsupported versions of Ceph. Currently `quincy` and `reef` are supported.
    # Future versions such as `squid` (v19) would require this to be set to `true`.
    # Do not set to true in production.
    allowUnsupported: false
  # The path on the host where configuration files will be persisted. Must be specified.
  # Important: if you reinstall the cluster, make sure you delete this directory from each host or else the mons will fail to start on the new cluster.
  # In Minikube, the '/data' directory is configured to persist across reboots. Use "/data/rook" in Minikube environment.
  dataDirHostPath: /var/lib/rook
  # Whether or not upgrade should continue even if a check fails
  # This means Ceph's status could be degraded and we don't recommend upgrading but you might decide otherwise
  # Use at your OWN risk
  # To understand Rook's upgrade process of Ceph, read https://rook.io/docs/rook/latest/ceph-upgrade.html#ceph-version-upgrades
  skipUpgradeChecks: false
  # Whether or not continue if PGs are not clean during an upgrade
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  # WaitTimeoutForHealthyOSDInMinutes defines the time (in minutes) the operator would wait before an OSD can be stopped for upgrade or restart.
  # If the timeout exceeds and OSD is not ok to stop, then the operator would skip upgrade for the current OSD and proceed with the next one
  # if `continueUpgradeAfterChecksEvenIfNotHealthy` is `false`. If `continueUpgradeAfterChecksEvenIfNotHealthy` is `true`, then operator would
  # continue with the upgrade of an OSD even if its not ok to stop after the timeout. This timeout won't be applied if `skipUpgradeChecks` is `true`.
  # The default wait timeout is 10 minutes.
  waitTimeoutForHealthyOSDInMinutes: 10
  # Whether or not requires PGs are clean before an OSD upgrade. If set to `true` OSD upgrade process won't start until PGs are healthy.
  # This configuration will be ignored if `skipUpgradeChecks` is `true`.
  # Default is false.
  upgradeOSDRequiresHealthyPGs: false
  mon:
    count: 3
    allowMultiplePerNode: true # FIXME Temporarily enabled until latias is running
  mgr:
    # When higher availability of the mgr is needed, increase the count to 2.
    # In that case, one mgr will be active and one in standby. When Ceph updates which
    # mgr is active, Rook will update the mgr services to match the active mgr.
    count: 2
    allowMultiplePerNode: false
    modules:
      # List of modules to optionally enable or disable.
      # Note the "dashboard" and "monitoring" modules are already configured by other settings in the cluster CR.
      - name: rook
        enabled: true
  # enable the ceph dashboard for viewing cluster status
  dashboard:
    enabled: true
    port: 8080
    ssl: false
  monitoring:
    # Whether to enable the prometheus service monitor
    enabled: true
    metricsDisabled: false
  network:
    connections:
      # Whether to encrypt the data in transit across the wire to prevent eavesdropping the data on the network.
      # The default is false. When encryption is enabled, all communication between clients and Ceph daemons, or between Ceph daemons will be encrypted.
      # When encryption is not enabled, clients still establish a strong initial authentication and data integrity is still validated with a crc check.
      # IMPORTANT: Encryption requires the 5.11 kernel for the latest nbd and cephfs drivers. Alternatively for testing only,
      # you can set the "mounter: rbd-nbd" in the rbd storage class, or "mounter: fuse" in the cephfs storage class.
      # The nbd and fuse drivers are *not* recommended in production since restarting the csi driver pod will disconnect the volumes.
      encryption:
        enabled: true
      # Whether to compress the data in transit across the wire. The default is false.
      # See the kernel requirements above for encryption.
      compression:
        enabled: true
      # Whether to require communication over msgr2. If true, the msgr v1 port (6789) will be disabled
      # and clients will be required to connect to the Ceph cluster with the v2 port (3300).
      # Requires a kernel that supports msgr v2 (kernel 5.11 or CentOS 8.4 or newer).
      requireMsgr2: true
    # enable host networking
    #provider: host
    ipFamily: IPv6
  # enable the crash collector for ceph daemon crash collection
  crashCollector:
    disable: false
    # Uncomment daysToRetain to prune ceph crash entries older than the
    # specified number of days.
    #daysToRetain: 30
  # enable log collector, daemons will log on files and rotate
  logCollector:
    enabled: true
    periodicity: daily # one of: hourly, daily, weekly, monthly
    maxLogSize: 5G # SUFFIX may be 'M' or 'G'. Must be at least 1M.
  # automate [data cleanup process](https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/ceph-teardown.md#delete-the-data-on-hosts) in cluster destruction.
  cleanupPolicy:
    # Since cluster cleanup is destructive to data, confirmation is required.
    # To destroy all Rook data on hosts during uninstall, confirmation must be set to "yes-really-destroy-data".
    # This value should only be set when the cluster is about to be deleted. After the confirmation is set,
    # Rook will immediately stop configuring the cluster and only wait for the delete command.
    # If the empty string is set, Rook will not destroy any data on hosts during uninstall.
    confirmation: ""
    # sanitizeDisks represents settings for sanitizing OSD disks on cluster deletion
    sanitizeDisks:
      # method indicates if the entire disk should be sanitized or simply ceph's metadata
      # in both case, re-install is possible
      # possible choices are 'complete' or 'quick' (default)
      method: quick
      # dataSource indicate where to get random bytes from to write on the disk
      # possible choices are 'zero' (default) or 'random'
      # using random sources will consume entropy from the system and will take much more time then the zero source
      dataSource: zero
      # iteration overwrite N times instead of the default (1)
      # takes an integer value
      iteration: 1
    # allowUninstallWithVolumes defines how the uninstall should be performed
    # If set to true, cephCluster deletion does not wait for the PVs to be deleted.
    allowUninstallWithVolumes: false
  # To control where various services will be scheduled by kubernetes, use the placement configuration sections below.
  # The example under 'all' would have all services scheduled on kubernetes nodes labeled with 'role=storage-node' and
  # tolerate taints with a key of 'storage-node'.
  # placement:
  #   all:
  #     nodeAffinity:
  #       requiredDuringSchedulingIgnoredDuringExecution:
  #         nodeSelectorTerms:
  #           - matchExpressions:
  #             - key: role
  #               operator: In
  #               values:
  #                 - storage-node
  #     podAffinity:
  #     podAntiAffinity:
  #     topologySpreadConstraints:
  #     tolerations:
  #       - key: storage-node
  #         operator: Exists
  # The above placement information can also be specified for mon, osd, and mgr components
  #   mon:
  # Monitor deployments may contain an anti-affinity rule for avoiding monitor
  # collocation on the same node. This is a required rule when host network is used
  # or when AllowMultiplePerNode is false. Otherwise this anti-affinity rule is a
  # preferred rule with weight: 50.
  #   osd:
  #   prepareosd:
  #   mgr:
  #   cleanup:
  placement:
    # Allow placing monitors on control plane to maintain quorum
    mon:
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
          operator: Exists
  annotations:
  #   all:
  #   mon:
  #   osd:
  #   cleanup:
  #   prepareosd:
  # clusterMetadata annotations will be applied to only `rook-ceph-mon-endpoints` configmap and the `rook-ceph-mon` and `rook-ceph-admin-keyring` secrets.
  # And clusterMetadata annotations will not be merged with `all` annotations.
  #   clusterMetadata:
  #     kubed.appscode.com/sync: "true"
  # If no mgr annotations are set, prometheus scrape annotations will be set by default.
  #   mgr:
  labels:
  #   all:
  #   mon:
  #   osd:
  #   cleanup:
  #   mgr:
  #   prepareosd:
  # These labels are applied to ceph-exporter servicemonitor only
  #   exporter:
  # monitoring is a list of key-value pairs. It is injected into all the monitoring resources created by operator.
  # These labels can be passed as LabelSelector to Prometheus
  #   monitoring:
  #   crashcollector:
  resources:
    #The requests and limits set here, allow the mgr pod to use half of one CPU core and 1 gigabyte of memory
    mgr:
      limits:
        memory: "1024Mi"
      requests:
        cpu: "100m"
        memory: "1024Mi"
    mon:
      limits:
        memory: "2048Mi"
      requests:
        cpu: "100m"
        memory: "2048Mi"
    osd:
      limits:
        memory: "2048Mi"
      requests:
        cpu: "200m"
        memory: "2048Mi"
    # For OSD it also is a possible to specify requests/limits based on device class
    #   osd-hdd:
    #   osd-ssd:
    #   osd-nvme:
    #   prepareosd:
    #   mgr-sidecar:
    #   crashcollector:
    #   logcollector:
    #   cleanup:
    #   exporter:
  # The option to automatically remove OSDs that are out and are safe to destroy.
  removeOSDsIfOutAndSafeToRemove: false
  priorityClassNames:
    #all: rook-ceph-default-priority-class
    mon: system-node-critical
    osd: system-node-critical
    mgr: system-cluster-critical
    #crashcollector: rook-ceph-crashcollector-priority-class
  storage: # cluster level storage configuration and selection
    useAllNodes: false
    useAllDevices: false
    #deviceFilter:
    config:
      # crushRoot: "custom-root" # specify a non-default root label for the CRUSH map
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # osdsPerDevice: "1" # this value can be overridden at the node or device level
      # encryptedDevice: "true" # the default value for this option is "false"
    # Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named
    # nodes below will be used as storage resources. Each node's 'name' field should match their 'kubernetes.io/hostname' label.
    nodes:
      - name: latios
        devices: # specific devices to use for storage can be specified for each node
          # - name: "/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1DL3AJY" # WD Red 3TB # FIXME Left out for now to test adding storage
          - name: "/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5CLHYZF" # WD Red 3TB
            config:
              metadataDevice: "/dev/h5b_metadata0/h5b_metadata0_3"
              encryptedDevice: "true"
          - name: "/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TDVUD1" # WD Red 4TB
            config:
              metadataDevice: "/dev/h5b_metadata0/h5b_metadata0_2"
              encryptedDevice: "true"
          - name: "/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6YX6UXN" # WD Red 4TB
            config:
              metadataDevice: "/dev/h5b_metadata0/h5b_metadata0_1"
              encryptedDevice: "true"
          - name: "/dev/disk/by-id/ata-TOSHIBA_MG08ACA16TE_6180A1PCFVGG" # Toshiba MG08 16TB
            config:
              metadataDevice: "/dev/h5b_metadata0/h5b_metadata0_0"
              encryptedDevice: "true"
          # - name: "nvme01" # multiple osds can be created on high performance devices
          #   config:
          #     osdsPerDevice: "5"
        # config: # configuration can be specified at the node level which overrides the cluster level config
    # when onlyApplyOSDPlacement is false, will merge both placement.All() and placement.osd
    onlyApplyOSDPlacement: false
    # Time for which an OSD pod will sleep before restarting, if it stopped due to flapping
    # flappingRestartIntervalHours: 24
  # The section for configuring management of daemon disruptions during upgrade or fencing.
  disruptionManagement:
    # If true, the operator will create and manage PodDisruptionBudgets for OSD, Mon, RGW, and MDS daemons. OSD PDBs are managed dynamically
    # via the strategy outlined in the [design](https://github.com/rook/rook/blob/master/design/ceph/ceph-managed-disruptionbudgets.md). The operator will
    # block eviction of OSDs by default and unblock them safely when drains are detected.
    managePodBudgets: true
    # A duration in minutes that determines how long an entire failureDomain like `region/zone/host` will be held in `noout` (in addition to the
    # default DOWN/OUT interval) when it is draining. This is only relevant when `managePodBudgets` is `true`. The default value is `30` minutes.
    osdMaintenanceTimeout: 30
    # A duration in minutes that the operator will wait for the placement groups to become healthy (active+clean) after a drain was completed and OSDs came back up.
    # Operator will continue with the next drain if the timeout exceeds. It only works if `managePodBudgets` is `true`.
    # No values or 0 means that the operator will wait until the placement groups are healthy before unblocking the next drain.
    pgHealthCheckTimeout: 0
  # csi defines CSI Driver settings applied per cluster.
  csi:
    readAffinity:
      # Enable read affinity to enable clients to optimize reads from an OSD in the same topology.
      # Enabling the read affinity may cause the OSDs to consume some extra memory.
      # For more details see this doc:
      # https://rook.io/docs/rook/latest/Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#enable-read-affinity-for-rbd-volumes
      enabled: true
    # cephfs driver specific settings.
    cephfs:
      # Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options.
      # kernelMountOptions: ""
      # Set CephFS Fuse mount options to use https://docs.ceph.com/en/quincy/man/8/ceph-fuse/#options.
      # fuseMountOptions: ""
  # healthChecks
  # Valid values for daemons are 'mon', 'osd', 'status'
  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s
      osd:
        disabled: false
        interval: 60s
      status:
        disabled: false
        interval: 60s
    # Change pod liveness probe timing or threshold values. Works for all mon,mgr,osd daemons.
    livenessProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
    # Change pod startup probe timing or threshold values. Works for all mon,mgr,osd daemons.
    startupProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
```
Logs to submit:
See comment.
Environment:

- Kernel (`uname -a`): 6.1.87
- `lsblk`:
- `lvs`:
- `pvs`: