Open · kubecto opened this issue 1 month ago
Rook only supports bluestore, so you can be sure they are all running bluestore. Rook creates OSDs with ceph-volume in "raw" mode, which means the device or partition is consumed directly, so there is no evidence of it in lsblk.
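If you want to double-check from the command line that a bare device really does hold a raw-mode bluestore OSD, the Ceph tooling can read the on-disk label even though lsblk shows nothing. This is only a sketch: it assumes the ceph tools are available somewhere the device is visible (for example on the node itself or inside the OSD pod), and /dev/sdb is a placeholder device name.

```sh
# Print the bluestore label written at the start of the device;
# a device owned by a Ceph OSD reports its osd_uuid, ceph_fsid, whoami, etc.
ceph-bluestore-tool show-label --dev /dev/sdb

# Ask ceph-volume to list any raw-mode OSDs it can find on this host
ceph-volume raw list
```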
A Ceph deployment requires a log disk and a data disk; how do I distinguish between them?
I tried lsblk /dev/sdb and created a new partition /dev/sdb1, and it did not affect Ceph's use of the device. In that case, what happens if someone else uses this disk? They may not even realize it is already being used by Rook Ceph, because it is consumed at the block level. Nobody can tell that this data disk is in use, which does not seem very friendly.
> A Ceph deployment requires a log disk and a data disk; how do I distinguish between them?
Which disks do you mean? Each OSD only requires one disk.
https://www.ibm.com/docs/en/storage-ceph/5?topic=bluestore-ceph-devices
This page says that you can use multiple devices: separate devices for the WAL (log) and the DB, in addition to the primary data device.
> https://www.ibm.com/docs/en/storage-ceph/5?topic=bluestore-ceph-devices
> This page says that you can use multiple devices: separate devices for the WAL (log) and the DB, in addition to the primary data device.
Yes, that is an option, it is just not the default. Try searching the Rook docs for "metadataDevice"
I found the documentation here:
https://rook.io/docs/rook/latest-release/CRDs/Cluster/host-cluster/?h=metadatadevice#specific-nodes-and-devices
Judging by the comments in that example, it does not explain how to distinguish between data disks, log disks, and DB disks; it only shows an additional partition and a udev-path device.
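In case it helps, here is a minimal sketch of what that distinction could look like in cluster.yaml, using the metadataDevice setting mentioned above. The node and device names (node1, sdb, sdc) are placeholders I made up, not values from this cluster, so treat it as an illustration of the option rather than a verified configuration.

```yaml
# Fragment of a CephCluster spec (goes under spec.storage):
# sdb is the data device, sdc holds the bluestore DB/WAL ("log") for that OSD.
nodes:
  - name: "node1"               # placeholder node name
    devices:
      - name: "sdb"             # data device
        config:
          metadataDevice: "sdc" # DB/WAL device for this OSD
```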
Is this a bug report or feature request?
I used three nodes as Ceph nodes, with sdb as the Ceph data disk on each. But after installing Rook, why can't I see any partitions for Ceph in lsblk?
I can see that the container for rook-ceph-osd-0 is already set to the bluestore type and uses sdb
I can see the status of the OSDs, and they do reflect the size of the disks on my three nodes, but why does lsblk show no trace of Ceph using them?
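One way to see the disk-to-OSD mapping even though lsblk shows nothing is to ask Ceph itself. A rough sketch, assuming the default toolbox deployment name rook-ceph-tools in the rook-ceph namespace; OSD id 0 is just an example:

```sh
# List the devices Ceph knows about and which daemon is using each one
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph device ls

# Dump one OSD's metadata, which includes the backing block device and bluestore details
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd metadata 0
```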
This is my cluster.yaml:
I am not sure how to manage this with Rook, since I have three data disks. Is the current situation correct? Also, Ceph deployments in production require log disks; how should I plan the log disks for the three nodes and declare them in cluster.yaml?
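As a sketch of one possible layout (not the project's recommendation, and the node and device names are placeholders): each of the three nodes uses sdb as the data disk and a faster sdc as the metadataDevice, i.e. the DB/WAL ("log") device, declared in the storage section of cluster.yaml.

```yaml
# Sketch only: spec.storage of the CephCluster CR with one data disk per node
# and the same metadataDevice name applied on every node.
storage:
  useAllNodes: false
  useAllDevices: false
  config:
    metadataDevice: "sdc"   # DB/WAL ("log") device on each node (placeholder name)
  nodes:
    - name: "node1"         # placeholder node names
      devices:
        - name: "sdb"       # data disk
    - name: "node2"
      devices:
        - name: "sdb"
    - name: "node3"
      devices:
        - name: "sdb"
```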
Deviation from expected behavior: I expect an explanation of the behavior above.
Expected behavior:
How to reproduce it (minimal and precise):
File(s) to submit:
cluster.yaml, if necessary
Logs to submit:
Crashing pod(s) logs, if necessary
To get logs, use
kubectl -n <namespace> logs <pod name>
When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI. Read the GitHub documentation if you need help.
Cluster Status to submit:
Output of kubectl commands, if necessary
To get the health of the cluster, use
kubectl rook-ceph health
To get the status of the cluster, use kubectl rook-ceph ceph status
For more details, see the Rook kubectl Plugin.
Environment:
Kernel (e.g. uname -a):
Rook version (use rook version inside of a Rook Pod): rook-1.10.12
Storage backend version (e.g. for ceph do ceph -v): ceph version 17.2.5
Kubernetes version (use kubectl version): 1.28.6
Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):