Right, this is due to changes in kube 1.6.
PR #236 fixes this.
@obnoxxx oh, thx. I checked issues and PRs yesterday and hadn't seen this.
Update: I'll check it and close the issue if it works for me.
@Scukerman I figured it out last night ... :-)
There is also an updated vagrant-based test environment in PR #227 (update to kube 1.6.1).
PR #225 adds tests to run in the vagrant env (essentially implementing the quickstart guide and the dynamic provisioning example). gk-deploy succeeds with the addition of #236, but I am still having problems with PVC creation: the PVC remains in Pending state and the request never reaches heketi. Any insights appreciated...
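(In case someone else hits the same Pending state: a quick way to see why a claim is stuck, sketched here under the assumption that the claim is called hello-world and that the heketi pod has "heketi" in its name, is to check the PVC events and the heketi log:)
$ kubectl describe pvc hello-world      # provisioning errors show up under Events
$ kubectl get pods | grep heketi        # find the heketi pod
$ kubectl logs <heketi-pod-name>        # see whether the provisioning request ever arrives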
It worked like a charm! Thx a lot, @obnoxxx. I was thinking about clusterrolebinding but I'm not as smart as you at kubernetes.
P.S. I used kubeadm to build up a cluster.
P.P.S. master branch (commit 3c154c608135f9c0878ade246acd5b32053da7e0) + PR #236
@obnoxxx after deploying I ran into what you said.
heketi-cli topology info
outputs nothing.
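(Side note: when heketi-cli prints nothing, it is worth double-checking that the CLI actually points at the heketi service. A rough sketch, assuming heketi is reachable at http://10.105.47.226:8080 as in the storage class further down:)
$ export HEKETI_CLI_SERVER=http://10.105.47.226:8080
$ heketi-cli cluster list     # should list at least one cluster id
$ heketi-cli topology info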
I loaded topology one more time and now I can see the cluster, but devices weren't added because of
Adding device /dev/sdd ... Unable to add device: Unable to execute command on glusterfs-6jdcf: Can't open /dev/sdd exclusively. Mounted filesystem?
I can't remove the VGs because they are mounted and in use in the gluster pods.
Yeah! I made it! I wiped the devices and reloaded the topology.
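(Roughly, the wipe-and-reload sequence looks like this, assuming /dev/sdd is the device from the error above and topology.json is the file passed to gk-deploy; wipefs is destructive. The kubectl output below shows the result.)
$ wipefs -a /dev/sdd                              # destructive: clears filesystem/LVM signatures
$ heketi-cli topology load --json=topology.json
$ heketi-cli topology info                        # devices should now appear under the cluster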
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
hello-world Bound pvc-b4875952-1ad6-11e7-b35d-0cc47ac5569a 5Gi RWO glusterfs 11s
@Scukerman, exactly: you need to wipe the devices (or the whole VMs...)
er, ... how did you get the hello-world claim into Bound state? This is the thing I am currently struggling with... Could you paste your pvc yaml file, please?
@obnoxxx I don't know. I got my topology info
working and just deployed this manifest:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-world
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
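(To reproduce: save the manifest, e.g. as pvc.yaml (the file name is arbitrary), and watch the claim bind:)
$ kubectl create -f pvc.yaml
$ kubectl get pvc hello-world -w     # should go from Pending to Bound once heketi provisions the volume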
@Scukerman thanks. This does not look special... will test again. It gives me hope ;-)
Oh, and could you also show your storage class (glusterfs)? @Scukerman
@obnoxxx, it looks pretty standard.
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.105.47.226:8080"
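(A minimal sketch of how this gets applied, assuming the manifest is saved as glusterfs-storageclass.yaml; the resturl has to point at your own heketi service:)
$ kubectl create -f glusterfs-storageclass.yaml
$ kubectl get storageclass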
Update: removed extra info. Nevermind, it all works
@Scukerman indeed, my storageclass had an error: it mentioned endpoint alongside resturl, which is wrong. But in kube versions < 1.6 this was not a problem... After I removed endpoint, it worked like a charm...
@obnoxxx according to this https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md it is not necessary anymore. And, like you said, it is a problem for 1.6 now.
P.S. I didn't know that. I've been using gluster since k8s 1.5, I guess.
P.P.S. Anyway, you're welcome :)
@Scukerman Hi Scukerman, I get the same error as you had before: Adding device /dev/sdd ... Unable to add device: Unable to execute command on glusterfs-6jdcf: Can't open /dev/sdd exclusively. Mounted filesystem?
I use a USB drive as the glusterfs device. How did you clear the error?
@Liangming666 The block device must be bare, meaning that it is not mounted, has no filesystem, no partitions, and no LVM volume data. The easiest way to ensure this is to unmount any volumes from the device and then do wipefs -a /dev/sdd (or whatever name your device has).
If you can't wipefs the device, remove it and run a rescan, see here:
$lsscsi
[2:2:0:0] disk DELL PERC H700 2.10 /dev/sda
[2:2:1:0] disk DELL PERC H700 2.10 /dev/sdc
$ echo 1 > /sys/class/scsi_device/2\:2\:1\:0/device/delete
$ echo "- - -" > /sys/class/scsi_host/hostX/scan
Note: where X is 0, 1, 2, 3, 4, ...
gluster-kubernetes: I used the latest release and master branch. I don't see the difference.
stdout log:
My topology.json: