This is only suitable for local PVs; other PV types are not bound to a particular node.
@weekface Can we provide a way to monitor the status of local PVs?
Use this command to get the node and path as custom columns:
kubectl get pv -l app.kubernetes.io/managed-by=tidb-operator -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0],PATH:.spec.local.path
The output is:
NAME                NODE          PATH
local-pv-168a01a7   172.16.1.2    /mnt/disks/disk1
local-pv-16cb93c9   172.16.1.3    /mnt/disks/disk2
local-pv-174bce80   172.16.1.4    /mnt/disks/disk3
local-pv-17514b6f   172.16.1.5    /mnt/disks/disk4
@cwen0
Using custom columns works great! But it would be better to also add the claim name and claim namespace; from those, users can tell which pod occupies the underlying PV.
kubectl get pv -l app.kubernetes.io/managed-by=tidb-operator -o=custom-columns='NAMESPACE:.spec.claimRef.namespace,CLAIM:.spec.claimRef.name,NAME:.metadata.name,NODE:.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0],PATH:.spec.local.path'
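For convenience, a shell alias could wrap this long command; a minimal sketch (the alias name pv-columns is just an illustration, not part of tidb-operator):
alias pv-columns="kubectl get pv -l app.kubernetes.io/managed-by=tidb-operator -o=custom-columns='NAMESPACE:.spec.claimRef.namespace,CLAIM:.spec.claimRef.name,NAME:.metadata.name,NODE:.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0],PATH:.spec.local.path'"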
We should add this to the operation documents, so users running on local PVs can easily find the data directory for a specific pod.
On GKE, at least, I see the node name in the annotation:
pv.kubernetes.io/provisioned-by: local-volume-provisioner-gke-beta-tidb-n1-standard-4-375-33507f39-rmkn-c233be3c-6068-11e9-9941-4201ac1f400a
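If needed, that annotation can be read back for a given PV with jsonpath (dots in the annotation key must be escaped; <pv-name> is a placeholder):
kubectl get pv <pv-name> -o jsonpath='{.metadata.annotations.pv\.kubernetes\.io/provisioned-by}'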
Yes, for local volumes there's a clear way to get the node name and directory path via the PV's nodeAffinity field.
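For a single PV, both values can be extracted directly with jsonpath; a minimal sketch, reusing a PV name from the output above:
kubectl get pv local-pv-168a01a7 -o jsonpath='{.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]}{"\t"}{.spec.local.path}{"\n"}'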
Use tkctl get volume instead: https://github.com/pingcap/tidb-operator/blob/master/docs/cli-manual.md#tkctl-get-component
Sometimes we need to count the status of PVs on specific nodes with kubectl. Although we have added this information to nodeAffinity, it is still not convenient to monitor with kubectl.
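That said, a rough per-node tally can be scraped from the custom-columns output; a minimal sketch, assuming the same tidb-operator label as above:
kubectl get pv -l app.kubernetes.io/managed-by=tidb-operator --no-headers -o=custom-columns='NODE:.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0],STATUS:.status.phase' | sort | uniq -c
This counts PVs grouped by node and status (e.g., Bound/Released), which covers simple checks, though proper monitoring would still need something better than ad-hoc kubectl parsing.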