Closed: bend closed this issue 6 years ago
As per the log, gluster pods are still present. You need to delete all pods first.
You can run kubectl get all to list the remaining resources and delete them.
Alternatively, passing --abort to gk-deploy will delete the existing resources.
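A minimal sketch of that cleanup, assuming gk-deploy sits in the current directory (adjust the path and namespace to your setup):

```shell
# Hedged sketch: abort a failed gk-deploy run, then list anything left over.
# The script location below is an assumption, not stated in the thread.
if [ -x ./gk-deploy ]; then
  ./gk-deploy --abort -y        # tear down resources created by the deploy
fi
if command -v kubectl >/dev/null 2>&1; then
  kubectl get all               # inspect remaining resources before retrying
fi
```

Both commands are guarded so the sketch is safe to paste on a machine where the tools are not installed.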
Yes, it works now. However, I now get this error: mount: /var/lib/kubelet/pods/ae815ad6-ac5a-11e8-bbc2-fa163eec9a70/volumes/kubernetes.io~glusterfs/heketi-storage: unknown filesystem type 'glusterfs'.
Do I need to install glusterfs-fuse on the host machine? Are there any packages that need to be installed on the nodes' host machines?
> Do I need to install glusterfs-fuse on the host machine?
Yes. The GlusterFS client version installed on the nodes should be as close as possible to the server version.
Please check https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md
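On an RPM-based node, installing the client and checking its version might look like this (the package manager and package name are assumptions; see the setup guide for your distro):

```shell
# Hedged sketch: install the FUSE client and print its version, so it can
# be compared against the version running in the gluster server pods.
if command -v yum >/dev/null 2>&1; then
  sudo yum install -y glusterfs-fuse
fi
if command -v glusterfs >/dev/null 2>&1; then
  glusterfs --version | head -n 1   # client version on this node
fi
```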
OK, it worked. Thank you.
@bend Glad it helped! If you think anything specific in the documentation needs improvement, let us know. Also, feel free to send pull requests :)
I want to reset the GlusterFS topology because one of my servers changed its IP.
So I tried to delete everything related to glusterfs:
kubectl delete -n gluster svc,deployments,daemonset,pods,pv,pvc --all
I also:
- deleted and recreated the volumes on all the nodes
- deleted the /etc/glusterfs directory on all the nodes
- edited the topology to match the new one (one IP changed)
I then ran:
./gk-deploy -n gluster -w 900 -g -y topology.json
But I still get the error:
The IP displayed in the error message does not match the one in my topology.json file.
Is there something I forgot to delete in order to completely reset the GlusterFS cluster? I don't care about the data, as I'm still trying to set up the cluster.