Azure / ACS

Azure Container Service - Bug Tracker + Announcements

Cluster broken after upgrade to K8s 1.8.9 #112

Open tomkosse opened 6 years ago

tomkosse commented 6 years ago

Is this a request for help?: Yes!


Is this a BUG REPORT or FEATURE REQUEST? (choose one): This is a bug report.

Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm): Kubernetes (originally 1.7.x, but upgraded to 1.8.4 a while ago. Today I upgraded to 1.8.9.)

What happened: All three of my masters stopped working. etcd is unable to start and form a cluster. etcd was still running version 2.3, but now tries to resume as version 3.2.16. This fails because the 2.3 snapshots are not compatible with 3.x.

What you expected to happen: A working cluster

How to reproduce it (as minimally and precisely as possible): I'm not exactly sure what causes this to go wrong, but I think I skipped a step in upgrading etcd from 2.3 to 3.2. A direct upgrade across that many versions is not straightforward, and is not supported by etcd.

Anything else we need to know: I'm on Azure Germany.

I've tried installing etcd 3.0.17 first and upgrading one minor version at a time. This has not succeeded, because the masters don't seem to be able to talk to each other on this version.
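For reference, etcd only supports upgrading one minor version at a time, with a cluster-health check between steps. A rough sketch of that path is below; the version numbers come from this report, while the unit name, binary path, and data directory are assumptions and will differ per install:

```shell
# Sketch of a stepwise etcd upgrade (2.3 -> 3.0 -> 3.1 -> 3.2).
# The real stop/replace/start commands are left commented out;
# this loop only prints the intended sequence of steps.
for v in 3.0.17 3.1.12 3.2.16; do
  echo "upgrade etcd binary to ${v}, then verify before continuing"
  # systemctl stop etcd                         # assumed unit name
  # (replace /usr/bin/etcd with the ${v} release binary)
  # systemctl start etcd
  # etcdctl cluster-health                      # v2 API: all members healthy
done

# Only once the whole cluster is stable on 3.x can the v2 keyspace
# be migrated to the v3 backend (data dir path is an assumption):
# ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd
```

Attempting to jump straight from a 2.3 data directory to a 3.2.16 binary skips both the intermediate minor versions and the v2-to-v3 data migration, which would match the snapshot-incompatibility symptom above.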

tomkosse commented 6 years ago

This should have been on the ACS-Engine repository. My apologies