gluster / gluster-kubernetes

GlusterFS Native Storage Service for Kubernetes
Apache License 2.0
875 stars 389 forks

How to find the mount point on node for a dynamic PVC? #519

Closed shaozi closed 5 years ago

shaozi commented 6 years ago

The PVC is created by specifying a GlusterFS storage class. Is there a way to find out where the mount point is on the node?

nixpanic commented 6 years ago

It is not mounted until a pod uses it in its volume list. When kubelet starts the pod, it mounts the Gluster volume on the host where the pod is going to run. Once the pod exits, the volume is unmounted again.

See the Persistent Volumes concept for more details.

shaozi commented 6 years ago

My use case: when a MySQL cluster dies, I have to go into the database volume and manually edit a file to fix the cluster. If I can find out where the PV is mounted on the node host (if that is possible), then I can launch vi and fix it from the Kubernetes node.

nixpanic commented 5 years ago

The path where OpenShift mounts the Gluster Volume on the host is something like this:

/var/lib/origin/openshift.local.volumes/pods/1eebe57d-b81e-11e8-8379-525400a4b9f3/volumes/kubernetes.io~glusterfs/db

The UUID in the path is the metadata:uid of the running pod. The /db at the end is the name of the glusterfs entry under spec:volumes in the pod.

For Kubernetes the path is very similar, just under the kubelet root (/var/lib/kubelet/pods/...) instead of /var/lib/origin/openshift.local.volumes.
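Putting that together for plain Kubernetes, the host path can be assembled from the pod UID and the volume name. A sketch, not taken from this issue; the namespace `db`, pod name `mysql-0`, and the lookup commands in the comments are assumptions:

```shell
# Hypothetical pod UID and volume name; on a live cluster you would
# look them up from the running pod, e.g.:
#   kubectl -n db get pod mysql-0 -o jsonpath='{.metadata.uid}'
pod_uid="1eebe57d-b81e-11e8-8379-525400a4b9f3"
vol_name="db"   # the glusterfs entry's name under spec:volumes

# kubelet's default root on plain Kubernetes (OpenShift uses
# /var/lib/origin/openshift.local.volumes instead)
mount_path="/var/lib/kubelet/pods/${pod_uid}/volumes/kubernetes.io~glusterfs/${vol_name}"
echo "${mount_path}"
```

With the path in hand you can `ls` or edit files directly on the node while the pod is running.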

Alternatively, you can manually mount the volume on one of the hosts. Get the PV for the PVC that holds the data and inspect it. The spec:glusterfs:path is the name of the Gluster volume, and the spec:glusterfs:endpoints points you to the Endpoints object with the server names (but you can use any Gluster server).
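The lookup and manual mount can be scripted along these lines. A sketch, assuming a PVC named `mysql-data`, a volume name `vol_mysql`, a server `gluster-1.example.com`, and a target directory `/mnt/gluster-debug` (all hypothetical):

```shell
# Hypothetical values; on a live cluster you would read them from the PV:
#   pv=$(kubectl get pvc mysql-data -o jsonpath='{.spec.volumeName}')
#   kubectl get pv "$pv" -o jsonpath='{.spec.glusterfs.path}'
#   kubectl get pv "$pv" -o jsonpath='{.spec.glusterfs.endpoints}'
gluster_volume="vol_mysql"              # spec:glusterfs:path from the PV
gluster_server="gluster-1.example.com"  # any server from the Endpoints object

# Build the manual mount command; any server in the trusted pool works,
# since the client fetches the full brick list from it on mount.
mount_cmd="mount -t glusterfs ${gluster_server}:/${gluster_volume} /mnt/gluster-debug"
echo "${mount_cmd}"
```

Run the printed command as root on the host, edit the file, then `umount /mnt/gluster-debug` when done.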

nixpanic commented 5 years ago

@shaozi have your questions been answered? If so, please close this issue. Thanks!

shaozi commented 5 years ago

I ended up creating a new pod to access the PVC.
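That workaround can look like the following manifest; a sketch, not from this issue, with the PVC name `mysql-data` assumed:

```yaml
# Hypothetical debug pod that mounts the existing PVC so files
# can be inspected and edited from inside the pod.
apiVersion: v1
kind: Pod
metadata:
  name: pvc-debug
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-data   # the PVC holding the database files
```

Then `kubectl exec -it pvc-debug -- sh` and edit under /data; delete the pod when finished.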

vvavepacket commented 4 years ago

@nixpanic @shaozi Related question: does the PV always talk to the Gluster node on the same host, or can it talk to a remote Gluster node? Say there are 3 Gluster nodes/endpoints provided in the Gluster service; which of them does my pod talk to? Is it based on the same host, or does it pick the one with the lowest latency?

nixpanic commented 4 years ago

Replication in Gluster (AFR) is done client side (in the mounted PVC), so the client talks to all Gluster servers that participate in the Gluster volume (a PV consisting of bricks). Reads are served by the brick (the mounted filesystem on a Gluster server) that responds first (often the local one); writes are sent to all bricks.