Open naisanza opened 7 years ago
I'm looking for an official gluster/gluster-heketi docker image that can be deployed as a node in kubernetes rather than onto the host system.
I'm looking for the following environment variables:
GLUSTER_PEER_ADDRESS
GLUSTER_BRICK_PATH
Mostly referencing Kubernetes' documentation on GlusterFS (https://kubernetes.io/docs/user-guide/persistent-volumes/#glusterfs).
Kubernetes can use GlusterFS as a Persistent Volume, so I have a separate GlusterFS server running in LXC on top of a filesystem backend.
I would like the gluster-heketi docker image to be deployable in kubernetes and have it connect to my GlusterFS LXC instance; it will be used as a proxy for all Gluster-related PVs used by kubernetes.
@naisanza Can you please clarify what these env variables are and why you need them?
GLUSTER_PEER_ADDRESS
GLUSTER_BRICK_PATH
Because brick paths are mostly created dynamically by Heketi based on the request, hence the question.
@humblec I'm assuming heketi will need to know the gluster address and the gluster brick/volume it can connect to and create its dynamic bricks/volumes in.
For instance, my gluster server has a brick/volume at /data/brick that will be available to a gluster client (or heketi). Unless that's wrong; when I used gluster two years ago, a brick had to exist before anything could touch the filesystem.
I also assume this would be the best way to compartmentalize where heketi creates its dynamic bricks. Like set-and-forget: the gluster brick path is /data/brick, and if heketi wanted to create a new brick in /bin, it would instead be created in /data/brick/<some unique id>/bin?
@naisanza Apologies for the delay here. Heketi wants its input as raw disks (/dev/sdb, /dev/sdc, etc.). All the other actions, like pvcreate/vgcreate/mkfs, are performed by Heketi at volume creation time. Heketi does not expect a filesystem mount point as input.
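For illustration, registering a raw device with heketi looks roughly like the sketch below; the server URL and node ID are placeholders, not values from this thread:

```sh
# Register a raw block device with heketi; heketi itself runs
# pvcreate/vgcreate/mkfs on it when volumes are created.
# The server URL and <node-id> are placeholders for this sketch.
heketi-cli -s http://heketi.example.com:8080 device add \
  --name=/dev/sdb \
  --node=<node-id>
```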
@naisanza As @humblec said, heketi makes a few assumptions about the Gluster clusters it manages and the underlying storage devices on the nodes.
To answer your initial inquiry, I see that you already found the official heketi image on Docker Hub. We use that image in our Kubernetes deployment and already pass environment variables to it for configuration. If you want to run heketi in a container, you still need to run heketi-cli -s <heketi_server> topology load --json=<topology_file>, where topology_file contains the hostnames or IP addresses of the Gluster nodes to include in the cluster and the storage devices on each of those nodes. You can see a sample topology file here.
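For reference, a minimal topology file might look like the sketch below, modeled on heketi's sample topology file; the hostnames, IPs, and device names are placeholders:

```sh
# Minimal sketch of a heketi topology file; every hostname, IP, and
# device below is a placeholder.
cat > topology.json <<'EOF'
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["gluster-node-1.example.com"],
              "storage": ["192.168.10.11"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb", "/dev/sdc"]
        }
      ]
    }
  ]
}
EOF

# Load it into a running heketi server (URL is a placeholder).
heketi-cli -s http://heketi.example.com:8080 topology load --json=topology.json
```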
@humblec @jarrpa So I started looking into the heketi code to find where it determines that a block device is suitable to be added. I followed it around and stopped here: https://github.com/heketi/heketi/blob/master/pkg/glusterfs/api/types.go
Comparing glusterfs to ZFS: one reason ZFS relies on block-level access is so it has visibility into bitrot, yet ZFS can also create pools from regular files.
I'm wondering whether glusterfs will discriminate between a block device and a file attached as a loopback device, since a loop device should still be accessible as a block device?
Update: and if it requires LVM, it seems LVM should be able to create virtual devices on top of loopback devices.
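A quick sketch of that idea; the backing file path, size, and VG name are arbitrary:

```sh
# Back a loop device with a sparse file (path and size are arbitrary).
truncate -s 10G /var/lib/fake-disk.img

# Attach it to the first free loop device; losetup prints the device name.
LOOP=$(losetup -f --show /var/lib/fake-disk.img)   # e.g. /dev/loop0

# LVM treats the loop device like any other block device.
pvcreate "$LOOP"
vgcreate vg_fake "$LOOP"
```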
@humblec @jarrpa Another idea. This documentation states that "Disks registered with Heketi must be in raw format".
You can create raw-format disk images with qemu-img create -f raw tinker.img 3.2G and then attach the image with losetup.
Will that work with heketi?
Update: also, another way to create a blank disk image: http://askubuntu.com/questions/667291/create-blank-disk-image-for-file-storage
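Putting those two steps together, a sketch would be the following; whether heketi then accepts the resulting /dev/loopN as a "raw disk" is exactly the open question:

```sh
# Create a raw-format disk image (name and size are arbitrary).
qemu-img create -f raw tinker.img 3200M

# Attach it to the first free loop device and print the device name.
losetup -f --show tinker.img   # e.g. /dev/loop0
```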
@humblec @jarrpa I've hit a wall with this. Without modifying the host, I won't be able to create loopback devices from within an LXC container.
The reason I wanted to run gluster within a container is so that its storage space can be grown easily, compared to the steps and time needed to resize an LVM disk image.
I'll need to think of something else
@naisanza Are you saying you want Gluster's storage to be within the container itself? To what end?
@jarrpa Gluster is containerized. Heketi is containerized. That'll be the end
I just want to use PVCs in Kubernetes, but:
@naisanza heketi expects storage devices on the hosts running the Gluster pods. Specifically, it expects something in /dev from the host that can be passed to pvcreate. This is not something that can be worked around for the foreseeable future if you want to use PVCs to dynamically create GlusterFS volumes.
@jarrpa loopback devices are created under /dev (like /dev/loop0).
I didn't want to use KVMs, but I'll try to get it to work in them.
@naisanza Right. Apologies for the confusion, but it sounded like you were trying to create a loopback device within the container itself, not on the host. If you create a loopback device on the host, that might work?
Therefore: is there an official example of using a loopback device as a raw disk?
You can use (non-persistent) loopback devices with the Gluster container. See https://github.com/gluster/gluster-containers/blob/master/CentOS/README.md#support-for-fake-disks for some details. This is also how minikube can deploy Gluster as a storage provisioner (with disk images stored in a persistent /srv directory).
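As a rough sketch of that approach (the path and size are placeholders): the backing image lives in a persistent directory, but loop attachments don't survive a reboot, so the losetup step has to be repeated, e.g. from a startup script:

```sh
# Keep the backing image in a persistent directory (e.g. /srv), as the
# minikube setup does; only the loop attachment is non-persistent.
truncate -s 10G /srv/fake-disk-1.img

# Re-attach on every boot, e.g. from a startup script.
losetup -f --show /srv/fake-disk-1.img   # e.g. /dev/loop0
```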
@naisanza Thanks! If I create loopback devices manually (with losetup) and just treat them as real block devices, will it work?
@nixpanic that's a nice version of Gluster.
I moved over to Rancher, and Rancher wasn't able to create a Storage Class linked to a container