vigevenoj / k8sharkbait

Spins up a Kubernetes cluster with applications on Linodes from scratch

Upgrade via node removal+replacement breaks storage #5

Closed: vigevenoj closed this issue 3 years ago

vigevenoj commented 6 years ago

Removed node 3 and rebuilt it as Ubuntu 17.10. The node's Gluster UUID changed, so the other two nodes are rejecting the rebuilt node (or node 3 is rejecting the other two), and the volumes are no longer mountable.
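For my own notes, the usual "peer rejected" dance looks roughly like this. It's a sketch, assuming the UUID the cluster expects for node 3 is still visible from a healthy peer; node and volume names are placeholders:

```sh
# On a healthy node: find the UUID the cluster still expects for the rebuilt node
gluster pool list          # lists UUID / hostname / state for every peer
gluster peer status        # should show the rebuilt node as Rejected

# On the rebuilt node: stop glusterd and put the expected UUID back
systemctl stop glusterd
# clear any stale state but keep glusterd.info (mostly relevant if the node wasn't wiped)
find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
sed -i 's/^UUID=.*/UUID=<uuid-expected-by-the-other-peers>/' /var/lib/glusterd/glusterd.info
systemctl start glusterd

# From a healthy node: re-probe and kick off a heal once the peer is back in the pool
gluster peer probe <rebuilt-node>
gluster volume heal <volname> full
```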

I think the process outlined in https://github.com/heketi/heketi/issues/635 is probably what I want to do. If the cluster can't be recovered that way, I think mounting the data as described in https://blog.lwolf.org/post/how-to-recover-data-from-broken-glusterfs-cluster/ and recovering enough to rebuild the postgres service's databases, the heketi boltdb database, the kanboard plugin directories, and traefik's acme.json will be enough to get the system back to where it was before the failure, by restoring all of that into the right places after recovery.
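That fallback route basically means activating heketi's LVM volumes on each node and copying the data straight out of the bricks. A rough sketch, assuming the default heketi layout (heketi-generated vg_*/brick_* names, with the payload in a brick/ subdirectory); actual device and path names will differ per node:

```sh
# Make the heketi-created LVM volumes visible and active
pvscan
vgscan
lvscan
vgchange -ay                                   # activate every volume group found

# Mount one brick LV read-only and look for the data
mkdir -p /mnt/recovery
mount -o ro /dev/mapper/<vg>-<brick_lv> /mnt/recovery
ls /mnt/recovery/brick                         # gluster payload usually lives under ./brick

# Copy out whatever is needed to rebuild state elsewhere
rsync -a /mnt/recovery/brick/ /root/recovered/<volume-name>/
umount /mnt/recovery
```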

vigevenoj commented 3 years ago

I'm closing this issue because I'm no longer using Heketi or GlusterFS. I'll update #2 to reflect this change, update the bootstrap process so it no longer installs Heketi or GlusterFS, and use the Linode cloud controller manager and the Linode block storage CSI driver instead.
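For anyone landing here later, the replacement is dynamic provisioning through the Linode block storage CSI driver, roughly like the sketch below. It assumes the CCM and CSI driver are already installed per their own docs (with the Linode API token secret configured) and that the driver's default linode-block-storage StorageClass exists; the PVC name is just an example:

```sh
# Confirm the driver's StorageClass is available
kubectl get storageclass          # should list linode-block-storage (and -retain)

# Claim a volume, e.g. for the postgres data directory
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: linode-block-storage
  resources:
    requests:
      storage: 10Gi
EOF

kubectl get pvc postgres-data     # should go Bound once the Linode volume is provisioned
```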