gluster / gluster-kubernetes

GlusterFS Native Storage Service for Kubernetes

What format should the underlying "raw format device" be in? #393

Closed bravecorvus closed 6 years ago

bravecorvus commented 6 years ago

Before I start, thanks for the help last time.

Now I'm back to the grind with some fresh hardware (two internal SATA HDDs for the worker nodes and a USB external drive for the master node, all on Ubuntu 16.04 LTS).

To start, I formatted all the drives as ext4 via mkfs.ext4 /dev/sdb. However, during deployment, the script hangs while creating the first node:

Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
  GlusterFS pods ... not found.
  deploy-heketi pod ... not found.
  heketi pod ... not found.
  gluster-s3 pod ... not found.
Creating initial resources ... serviceaccount "heketi-service-account" created
clusterrolebinding "heketi-sa-view" created
clusterrolebinding "heketi-sa-view" labeled
OK
node "kraken" labeled
node "kraken01" labeled
node "kraken02" labeled
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... ^[^[OK
secret "heketi-config-secret" created
secret "heketi-config-secret" labeled
service "deploy-heketi" created
deployment "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: e75792262e403db1cfcfbebdd6894f54
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node kraken ... ID: 49c65133271a827fff1b1e1d8315bdf3
^C
$ cd ~/gluster-kubernetes/deploy; and ./gk-deploy -gy --abort
Using Kubernetes CLI.
Using namespace "default".
deployment "deploy-heketi" deleted
pod "deploy-heketi-5c45f969bd-zsd6m" deleted
service "deploy-heketi" deleted
secret "heketi-config-secret" deleted
serviceaccount "heketi-service-account" deleted
clusterrolebinding "heketi-sa-view" deleted
No resources found
node "kraken" labeled
node "kraken01" labeled
node "kraken02" labeled
daemonset "glusterfs" deleted
$ cd ~/gluster-kubernetes/deploy; and ./gk-deploy -gy topology.json
Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
  GlusterFS pods ... not found.
  deploy-heketi pod ... not found.
  heketi pod ... not found.
  gluster-s3 pod ... not found.
Creating initial resources ... serviceaccount "heketi-service-account" created
clusterrolebinding "heketi-sa-view" created
clusterrolebinding "heketi-sa-view" labeled
OK
node "kraken" labeled
node "kraken01" labeled
node "kraken02" labeled
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... OK
secret "heketi-config-secret" created
secret "heketi-config-secret" labeled
service "deploy-heketi" created
deployment "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: 4a985ab2336cdab165dc3f500d29bbb6
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node kraken ... ID: 81ad9ef7ce077169432aafc4a2814455

Next, since all the documentation about setting up GlusterFS directly on bare metal says the underlying filesystem should be XFS, I reformatted the drives as XFS using the following commands:

$ sudo su
$ mkfs.xfs /dev/sdb
$ fdisk /dev/sdb
Welcome to fdisk (util-linux 2.27.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

/dev/sdb: device contains a valid 'xfs' signature; it is strongly recommended to wipe the device with wipefs(8) if this is unexpected, in order to avoid possible collisions

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x177938dc.

Command (m for help): wipefs /dev/sdb
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

where wipefs /dev/sdb is a command I typed at the fdisk prompt (fdisk evidently took the leading w as its write command, hence the "partition table has been altered" message)

However, I get exactly the same hang when running ./gk-deploy -gy topology.json.

What should I format the underlying storage device to be?

jarrpa commented 6 years ago

When we say "raw block devices," we mean there should be no formatting at all: no partitions, no filesystem, no LVM artifacts, nothing. We suggest running wipefs -a on each device to make sure you get everything.
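
A minimal sketch of what that looks like, assuming /dev/sdb is the disk being handed to heketi (run on each storage node; this destroys everything on the device):

$ sudo wipefs -a /dev/sdb   # erase all filesystem, partition-table, and LVM signatures
$ sudo wipefs /dev/sdb      # with no options it only lists signatures; no output means clean
$ lsblk /dev/sdb            # confirm no partitions remain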

JohnStrunk commented 6 years ago

The raw devices should be just that: an unpartitioned raw block device. Heketi will take care of everything on top of that.
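
For reference, the device is then listed by its bare path in topology.json. A minimal sketch using the node names from this thread (the storage IP is a made-up placeholder):

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["kraken01"],
              "storage": ["192.168.10.101"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        }
      ]
    }
  ]
}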


bravecorvus commented 6 years ago

For now, wiping the devices completely clean seems to work (i.e., ./gk-deploy can initialize all the nodes without a problem).

bravecorvus commented 6 years ago

I have no further issues arising from GlusterFS! Thanks so much @jarrpa and @JohnStrunk for all your help!

jayunit100 commented 6 years ago

What if you're on a cluster with no externally mounted, clean raw devices?
Since we're in containers, can gluster just use tmpfs on disk or something, from inside the containers?

JohnStrunk commented 6 years ago

Heketi assumes there is a raw device on which LVM can be used to carve out bricks for volumes. There isn't really a way to just give the Gluster pod a file system and still use dynamic provisioning.
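
For illustration, what heketi does on each device is roughly the following (a sketch, not heketi's exact commands; the vg/tp names are placeholders for the IDs heketi generates):

$ pvcreate /dev/sdb                       # the whole raw disk becomes an LVM physical volume
$ vgcreate vg_123abc /dev/sdb             # one volume group per device
$ lvcreate -L 10G -T vg_123abc/tp_456def  # thin pool from which brick LVs are carved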

You have a couple options:

phlogistonjohn commented 6 years ago

I think that @ansiwen has recently been using loopback devices successfully.

(And to be extra silly for a moment: heketi doesn't really care what's beneath the block device, so you could choose to put LVM on your loopback device and expose an LVM LV to heketi! I just did it to prove to myself it would work -- but perhaps this is a don't-try-this-at-home scenario :-))
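
For anyone who wants to try the loopback route, a minimal sketch (test setups only; the backing-file path and size are made up, and plain loop devices don't survive a reboot without extra setup):

$ sudo truncate -s 100G /srv/gluster-backing.img    # create a sparse backing file
$ sudo losetup -f --show /srv/gluster-backing.img   # attach it; prints e.g. /dev/loop0
# then list that /dev/loopN path as a device in topology.json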