gluster / gluster-kubernetes

GlusterFS Native Storage Service for Kubernetes
Apache License 2.0

Documentation incomplete? #515

Open nicolas-goudry opened 5 years ago

nicolas-goudry commented 5 years ago

Hello,

I’m trying to set up GlusterFS in my K8S infrastructure, but I can’t find anywhere how to create/configure the three nodes required by the install guide... Can you please help me?

Thanks in advance!

nicolas-goudry commented 5 years ago

I just understood that I had confused nodes and pods.

So, I updated my cluster with kops to add 3 new nodes. Everything went well, but now some of my deployments also use these 3 new nodes… How do I “reserve” those 3 nodes for GlusterFS?

~Also, how do I attach raw block devices to nodes? There’s no mention of this in the K8S doc on nodes… But there is some mention in the K8S volumes documentation.~

~Last question: how do I bash into K8S nodes? It seems impossible, but the doc says to run some iptables commands…~

It seems I should attach an EBS volume to an EC2 instance and run commands over SSH on the EC2 instance? This really is unclear to me…

I’m kinda lost on this… I think the documentation is missing some details about deployment… Or maybe it’s me who doesn’t understand how this works?

Please help :sob: :sob:

phlogistonjohn commented 5 years ago

> I just understood that I confounded nodes and pods…

> So, I updated my cluster with kops to add 3 new nodes. Everything went well but now some of my deployments also uses these 3 new nodes… How to « reserve » those 3 nodes for GlusterFS?

gluster-kubernetes (and thus gk-deploy) doesn't really care much about what other stuff might be running on your nodes. Dedicated nodes are not a requirement of gluster. If you really do want to dedicate nodes to gluster, you'll probably want to use the standard Kubernetes approaches for this. I'd start out by reviewing: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ and https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
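As a minimal sketch of that approach (the node names and the `dedicated=gluster` taint key are examples, not something gk-deploy mandates):

```shell
# Label the three nodes you want gluster on (node names are examples).
# If memory serves, gk-deploy's glusterfs daemonset already selects
# nodes via a "storagenode=glusterfs" label, so this part may be
# done for you when you run the deploy script.
kubectl label nodes node-1 node-2 node-3 storagenode=glusterfs

# Optionally taint them so ordinary app pods stay off; only pods
# carrying a matching toleration will be scheduled there.
kubectl taint nodes node-1 node-2 node-3 dedicated=gluster:NoSchedule
```

If you do add the taint, you'd also need to add a matching toleration to the glusterfs pod spec, otherwise the gluster pods themselves will be kept off those nodes.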

But you may want to start off small and do a basic deployment where other app pods will run on the same node. (This is sometimes called a converged or hyperconverged scenario).

Alternatively, you can create standalone gluster nodes (not automatically set up by gk-deploy) and make those truly dedicated nodes for gluster storage. However, these nodes will not be part of your k8s cluster with all the advantages and disadvantages that brings.

> Also, how to attach raw block devices to nodes? There’s no mention about this in K8S doc on nodes… But there is some mention in K8S volumes documentation.

This project predates all of the support for block volumes in k8s, so you can ignore any of that stuff (for now). The key point is that the node's /dev will be shared with the gluster pod, so any block device that shows up on the node will be usable by the heketi/glusterfs system gk-deploy creates. Use whatever cloud provider / VM / bare metal approach is appropriate for your nodes, and the pod will be able to use that device.
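A quick way to sanity-check that sharing, sketched below (the pod name is a placeholder; list your actual gluster pods with `kubectl get pods` first):

```shell
# On the node itself (e.g. over SSH), confirm the new device is visible
lsblk

# Once the gluster pods are running, the same device should appear
# from inside the pod too, since the node's /dev is shared with it.
# "glusterfs-xxxxx" is a placeholder for your real pod name.
kubectl exec glusterfs-xxxxx -- lsblk
```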

> Last question: how to bash into K8S nodes? Seems impossible, but doc says to run some iptables commands…

That depends on your provider again. If you were running bare metal, for instance, you'd just ssh into each node by IP or dns name. If you're on a cloud provider that supports extra magic, just use that.

> It seems I should attach an EBS volume to an EC2 instance and run commands by SSH on EC2 instance? This really is unclear to me…

So if you're in EC2, then attaching an EBS volume should create a new device file in /dev. Let's pretend it creates /dev/sdc. In your topology file, you'd then need to specify /dev/sdc for your nodes. Remember that this would be three separate EBS volumes, one attached to each node.
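For illustration, here's what one node entry of the heketi topology file might look like under that assumption (the hostname and IP are placeholders; /dev/sdc is the pretend device from above):

```json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["node-1.example.com"],
              "storage": ["10.0.0.1"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdc"]
        }
      ]
    }
  ]
}
```

You'd repeat the node entry for each of the three nodes, each listing its own EBS-backed device (which may well get a different /dev name on each instance).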

> I’m kinda lost on this… I think the documentation is missing some details about deployment… Or maybe is it me who doesn’t understand how this works?
>
> Please help

I think the documentation is just being very provider agnostic, as this system works in many environments, such as bare metal, VM, & cloud, and it tries not to get too environment specific. If you look at how the tests & vagrant demo environment in this repo work, you'll see they create a set of nodes with disks attached (ignoring the master). K8s itself ignores these devices, but when gluster & heketi are provisioned they will start using them.