Closed timfpark closed 7 years ago
Hi Tim, unfortunately yes. I mentioned it in the README for this chart and created an issue: kubernetes/charts#685
I didn't try it, but in theory it should be possible to manually delete the lost etcd members from the cluster and then scale the cluster back up.
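I haven't verified this against the chart, but a sketch of the manual member removal with `etcdctl` might look like the following (the pod name `etcd-0`, StatefulSet name, and the member ID are illustrative assumptions, not taken from this cluster):

```shell
# List current members from a still-healthy pod; the ID shown is illustrative.
kubectl exec etcd-0 -- etcdctl member list

# Remove the lost member by its ID so the cluster's quorum math recovers.
kubectl exec etcd-0 -- etcdctl member remove 8211f1d0f64f3269

# Then scale the StatefulSet back up so a fresh member can rejoin.
kubectl scale statefulset etcd --replicas=3
```

The key point is that etcd will not accept a replacement member until the dead one has been explicitly removed from the member list.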
Thanks for your answer, and sorry for missing it in the README.
For anyone coming here in the future: you can use this config to create the etcd cluster instead (tested only on GKE). The pull request in kubernetes/charts#685 didn't work for me on GKE.
A workaround for this without losing your data or recreating the whole cluster: use Helm to scale the cluster down by one node with `helm upgrade etcd incubator/etcd --set replicas=2`. Wait a few minutes while all nodes do a rolling restart, then scale it back up and voilà :)
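Scripted, that scale-down/scale-up cycle could look like this (a sketch: the release name `etcd`, the StatefulSet name, and the original replica count of 3 are assumptions carried over from the commands above):

```shell
# Scale the etcd cluster down by one replica via Helm.
helm upgrade etcd incubator/etcd --set replicas=2

# Wait for the rolling restart to settle before scaling back up.
kubectl rollout status statefulset/etcd --timeout=300s

# Restore the original replica count; the replaced member rejoins fresh.
helm upgrade etcd incubator/etcd --set replicas=3
```

Waiting on `kubectl rollout status` rather than a fixed sleep avoids scaling back up while pods are still restarting.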
I've been running a Stolon cluster for about a week (very successfully), but today I noticed that I have lost an etcd pod completely and another is in a CrashLoopBackOff cycle:
The logs for postgresql-etcd-0 are the following:
Have you seen this before? Is there any way to manually restart the etcd portion of the cluster easily?