kubernetes/website

Kubernetes website and documentation repo: https://kubernetes.io
Creative Commons Attribution 4.0 International

Control plane failure modes for high-availability documentation #43849

Open royalsflush opened 1 year ago

royalsflush commented 1 year ago

We likely need some brief documentation on what customers can expect in terms of the reliability of the control plane. We discussed the "majority" vs "less than majority" buckets of problems; it would be great to have documentation that we can point to in order to justify our reliability stance.

k8s-ci-robot commented 1 year ago

There are no sig labels on this issue. Please add an appropriate label by using one of the following commands:

  • /sig <group-name>
  • /wg <group-name>
  • /committee <committee-name>

Please see the group list for a listing of the SIGs, working groups, and committees available.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
k8s-ci-robot commented 1 year ago

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

neolit123 commented 1 year ago

/transfer website

transferring to kubernetes/website, where the k8s documentation is located.

neolit123 commented 1 year ago

> We likely need some brief documentation on what customers can expect in terms of the reliability of the control plane. We discussed the "majority" vs "less than majority" buckets of problems; it would be great to have documentation that we can point to in order to justify our reliability stance.

when speaking about "majority" is this about etcd's raft algorithm? k8s core doesn't have this requirement directly. also, when / where was this discussed?
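For reference, if "majority" here does refer to etcd, quorum is a strict majority of the voting members, i.e. floor(n/2)+1, and the "majority of nodes failed" bucket is exactly the point where that quorum is lost. A minimal sketch of the arithmetic (assuming the question is indeed about etcd raft quorum, which Kubernetes core itself does not impose):

```go
package main

import "fmt"

func main() {
	// etcd quorum is a strict majority of the voting members: floor(n/2)+1.
	// Writes keep succeeding only while quorum is reachable, so the number
	// of member failures the cluster tolerates is n - quorum.
	for _, members := range []int{1, 3, 5} {
		quorum := members/2 + 1
		fmt.Printf("%d members: quorum = %d, tolerated failures = %d\n",
			members, quorum, members-quorum)
	}
}
```

So with three members, losing one keeps the cluster writable, while losing two (a majority) does not.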

neolit123 commented 1 year ago

/kind feature
/triage needs-information
/sig docs

(tagging with docs until owner is established, if ever)

sftim commented 1 year ago

It'd be good to understand the gaps: what should https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/ cover that it doesn't?

neolit123 commented 1 year ago

/close

the ticket has missing information; questions were not answered. please update and re-open.

k8s-ci-robot commented 1 year ago

@neolit123: Closing this issue.

In response to [this](https://github.com/kubernetes/website/issues/43849#issuecomment-1809963350):

> /close
>
> the ticket has missing information; questions were not answered. please update and re-open.
royalsflush commented 1 year ago

Hi all, really sorry for the delay in elaborating on this issue!

The context is that my team is working on Kubernetes reliability (as part of a product) and we want to understand the failure modes of the control plane. I had a chat with Han Kang about this offline and wanted to amend this issue with details from that conversation about what I think is missing, but I first wanted to review the links you all sent to see if I was overlooking something. @sftim thank you very much for sending it over!

The part I wanted the most is the expected restrictions when one or more control plane nodes are down. We're currently working with a setup that considers three control plane nodes to be HA, so we were trying to understand the consequences of:

  1. A single node being down
  2. The majority of nodes
  3. All of them (we assume cluster down, but just for completeness)

So what I was asking was "what Kubernetes customers can expect in case of failure of their control plane nodes".

Let me know if this makes sense, and sorry again for the delay.
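One way to make the difference between these three scenarios observable (a rough sketch only; the control plane hostnames below are placeholders, and skipping TLS verification is just to keep the probe short, not a recommendation) is to poll each API server's /readyz endpoint, which is typically readable anonymously on default installations via the system:public-info-viewer role:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hypothetical addresses for a three-node control plane, one per zone.
	endpoints := []string{
		"https://cp-a.example.internal:6443",
		"https://cp-b.example.internal:6443",
		"https://cp-c.example.internal:6443",
	}

	client := &http.Client{
		Timeout: 3 * time.Second,
		// Quick probe only; a real client should verify the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	for _, ep := range endpoints {
		resp, err := client.Get(ep + "/readyz")
		if err != nil {
			fmt.Printf("%s: unreachable (%v)\n", ep, err)
			continue
		}
		resp.Body.Close()
		// /readyz returns 200 only while the API server can serve traffic,
		// which includes its etcd health check.
		fmt.Printf("%s: %s\n", ep, resp.Status)
	}
}
```

With a single control plane node down you'd expect two endpoints to keep returning 200 OK; once a majority of the etcd members is gone, even the reachable API servers would likely start failing their etcd health check and rejecting writes; and a full outage means every probe fails. That is roughly the set of expectations the requested page could spell out.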

neolit123 commented 1 year ago

what you are talking about makes sense, @royalsflush

please include more detail in the OP post: https://github.com/kubernetes/website/issues/43849#issue-1981863546

i don't mind us including more documentation about failures and recovery of the CP, as the documentation is lacking. let's see what is actionable here.

/reopen

k8s-ci-robot commented 1 year ago

@neolit123: Reopened this issue.

In response to [this](https://github.com/kubernetes/website/issues/43849#issuecomment-1812116102):

> what you are talking about makes sense, @royalsflush
>
> please include more detail in the OP post: https://github.com/kubernetes/website/issues/43849#issue-1981863546
>
> i don't mind us including more documentation about failures and recovery of the CP, as the documentation is lacking. let's see what is actionable here.
>
> /reopen
sftim commented 1 year ago

/sig architecture
/sig api-machinery
/remove-triage needs-information

Please revise (edit) the original issue description @royalsflush to explain what you want added to the documentation. You could write this as a user story or as a definition of done.

logicalhan commented 1 year ago

/assign

(I can take this, if y'all don't mind)

sftim commented 1 year ago

Thanks @logicalhan. These things are important.

sftim commented 1 year ago

/triage accepted
/priority important-longterm

sftim commented 1 year ago

I would add that we ideally ought to cover some of the less common situations too; I'll outline some below. What I hope is that someone carefully reading the docs can answer what the expected outcome is, without actually setting up a cluster or reading any source code. “Answer” means working out whether the expected behavior as seen by a client is: API usable; API unavailable / degraded; undefined behavior.

E.g.:

  • three control plane nodes (1 per zone); separate etcd hosts (1 per zone); full failure in exactly one zone; “perfect” client-side load balancing and retries
  • three control plane nodes (1 per zone); separate etcd hosts (1 per zone); etcd healthy but full API server failure in exactly one zone; “perfect” client-side load balancing and retries
  • even number of control plane nodes, all of which are healthy; separate etcd cluster has an odd number of nodes and some (but fewer than half) have failed; “perfect” client-side load balancing and retries
  • even number of control plane nodes, only half of which are healthy; separate etcd cluster has an odd number of nodes and some (but fewer than half) have failed; “perfect” client-side load balancing and retries
  • stacked 3-node control plane; each API server only speaks to local etcd; one etcd fully unavailable; “dumb” round-robin style load balancing without health checks

I'm sure we could think up more; maybe we even have a list already?


We can produce - and publish - docs without meeting this ideal; I've mentioned it so we understand where we'd like to end up.
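To make that “answer” framing concrete, many of these scenarios seem to reduce to two inputs: is etcd quorum intact, and can the client reach at least one healthy API server. A rough illustrative sketch of that classification (the function and its thresholds are assumptions about how the docs might summarize it, not a statement of guaranteed behavior):

```go
package main

import "fmt"

// Outcome mirrors the three buckets above.
type Outcome string

const (
	APIUsable   Outcome = "API usable"
	APIDegraded Outcome = "API unavailable / degraded"
	Undefined   Outcome = "undefined behavior"
)

// classify is a deliberately crude rule of thumb: writes need etcd quorum,
// and clients need at least one reachable, healthy API server. Anything the
// rule does not model (asymmetric partitions, etcd restored from an old
// backup, and so on) would land in the Undefined bucket instead.
func classify(healthyEtcd, totalEtcd, reachableAPIServers int) Outcome {
	quorum := totalEtcd/2 + 1
	if reachableAPIServers > 0 && healthyEtcd >= quorum {
		return APIUsable
	}
	return APIDegraded
}

func main() {
	// Three control plane nodes with one etcd member per zone:
	fmt.Println(classify(2, 3, 2)) // one zone fully down          -> API usable
	fmt.Println(classify(1, 3, 3)) // etcd quorum lost, servers up -> API unavailable / degraded
	fmt.Println(classify(0, 3, 0)) // full control plane outage    -> API unavailable / degraded
}
```

Each scenario in the list should then map onto one of these three outcomes just by reading the page, which is the check being proposed here.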

logicalhan commented 1 year ago

> I would add that we ideally ought to cover some of the less common situations too; I'll outline some below. What I hope is that someone carefully reading the docs can answer what the expected outcome is, without actually setting up a cluster or reading any source code. “Answer” means working out whether the expected behavior as seen by a client is: API usable; API unavailable / degraded; undefined behavior.
>
> E.g.:
>
>   • three control plane nodes (1 per zone); separate etcd hosts (1 per zone); full failure in exactly one zone; “perfect” client-side load balancing and retries
>   • three control plane nodes (1 per zone); separate etcd hosts (1 per zone); etcd healthy but full API server failure in exactly one zone; “perfect” client-side load balancing and retries
>   • even number of control plane nodes, all of which are healthy; separate etcd cluster has an odd number of nodes and some (but fewer than half) have failed; “perfect” client-side load balancing and retries
>   • even number of control plane nodes, only half of which are healthy; separate etcd cluster has an odd number of nodes and some (but fewer than half) have failed; “perfect” client-side load balancing and retries
>   • stacked 3-node control plane; each API server only speaks to local etcd; one etcd fully unavailable; “dumb” round-robin style load balancing without health checks
>
> I'm sure we could think up more; maybe we even have a list already?
>
> We can produce - and publish - docs without meeting this ideal; I've mentioned it so we understand where we'd like to end up.

Additional scenarios:

logicalhan commented 1 year ago

I may group answers based on local or remote etcd hosts, since the answers likely hinge on that distinction anyway.

sftim commented 1 year ago

These questions need not appear in the page; you could think of them as like unit tests for the docs. In other words, if a reviewer picks a question, can they - just by reading what's in the page - work out what the answer must be?

(we could even ask a large language AI model to help us check)

logicalhan commented 1 year ago

> These questions need not appear in the page; you could think of them as like unit tests for the docs. In other words, if a reviewer picks a question, can they - just by reading what's in the page - work out what the answer must be?
>
> (we could even ask a large language AI model to help us check)

I dig the framing.

sftim commented 1 year ago

https://github.com/kubernetes/website/pull/43903 feels slightly relevant (only slightly, though). I don't know how much we want to also cover upgrades and how they impact failure modes.

neolit123 commented 1 year ago

+1 to cover upgrades and rollback.

in KEP PRRs we require "downgradability" of k8s features, but etcd by design does not support downgrade well, yet: https://github.com/etcd-io/etcd/issues/15878#issuecomment-1567986308

kubeadm as a whole also does not support downgrades. it supports rollback, in case of component failure, but that may or may not work, depending on:

> #43903 feels slightly relevant (only slightly, though). I don't know how much we want to also cover upgrades and how they impact failure modes.

it's a bug in kubeadm's api-machinery usage and the etcd upgrade failure will trigger a rollback, unless the user works around it. but since the rollback will restore an etcd with the same version, it will act as a component restart.

kumarankit999 commented 1 year ago

+1 @sftim, can you reshare the docs for the gaps?

sftim commented 1 year ago

> +1 @sftim, can you reshare the docs for the gaps?

I don't understand what you'd like me to do here @kumarankit999. How would you know when I'd done what you're asking (can you frame it as a definition of done)?

If you mean https://github.com/kubernetes/website/issues/43849#issuecomment-1799444308, I was the person who asked the question, and I do not have the answer to it.

k8s-triage-robot commented 2 days ago

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted