Hi guys, I couldn't find a better place to ask this question, so I'm writing it here.
In short: etcd-manager doesn't start the etcd process (or starts the wrong version), leaving etcd in an unhealthy state.
I'm running k8s 1.12.8 with etcd v2 on AWS and am planning an upgrade to etcd v3. I added `provider: Manager` to the kops cluster manifest (relevant snippet below) and performed a cluster upgrade. The process requires a rolling update of the nodes, and when each new master node comes up:
master node 1: came up as expected: 2 etcd-manager containers (main and events) and 2 etcd processes
master node 2: came up with 2 etcd-manager containers (main and events), but only 1 etcd process started (in the "events" etcd-manager container)
master node 3: came up with 2 etcd-manager containers (main and events), but only 1 etcd process started (in the "main" etcd-manager container)
Cluster validation fails with an "etcd-0" or "etcd-1" unhealthy error.
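For reference, after the change the etcd section of my cluster spec looks roughly like this (the instance group and member names here are placeholders, not my real ones):

```yaml
# etcd section of the kops cluster spec, with etcd-manager enabled via provider: Manager
# (instance group / member names below are illustrative placeholders for my three masters)
etcdClusters:
- name: main
  provider: Manager
  etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
  - instanceGroup: master-us-east-1b
    name: b
  - instanceGroup: master-us-east-1c
    name: c
- name: events
  provider: Manager
  etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
  - instanceGroup: master-us-east-1b
    name: b
  - instanceGroup: master-us-east-1c
    name: c
```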
When I was upgrading previous clusters, a few restarts of the master node would get it into the proper configuration.
Now I've restarted the failed master nodes dozens of times, and they keep coming up in this broken configuration.
Please help! What else can I do to make sure that all etcd processes are started and etcd is in a healthy state?