Closed — sttts closed this 1 year ago
Excited :tada: to see this discussion starting. Let's think about how to advance conventions, synergies, and common patterns across Kubernetes-like control plane APIs.
IMO kcp complements CNCF graduated/incubating projects like Crossplane, ArgoCD, KubeVela, Operator Framework, and OpenKruise in providing ways to leverage K8s APIs (CRDs) for all software-defined infrastructure, as well as sandbox projects like cdk8s, KUDO, and PipeCD, and other open source projects like Kratix.
Here's a presentation from me on this topic: Kubernetes is your cloud control plane
Huge +1.
Big UP and +10000
2. Logical Clusters for sharing a kcp instance (backed by one etcd cluster) to offer many cluster-like, independent API surfaces, with a logical cluster costing about as much as a namespace in Kubernetes.
Will kcp support a master-slave topology for dual-site deployments, or a distributed replication mechanism such as CockroachDB's, to avoid it becoming a single point of failure, given that by default it is backed by only one etcd datastore?
For the record -- kcp will be presenting at the next TAG Runtime meeting on 3rd August.
cc @raravena80 @helayoty
Thank you for letting us know @nikhita! I hope to attend.
Folks in TAG App Delivery and WG Platforms are interested in how kcp will enable platform engineers to publish and bind APIs from one "producer" cluster in another "consumer" cluster without requiring an implementing controller for the API in the consuming cluster. This promises to enable us to scale the operator pattern better for multicluster fleets by letting us run operators for custom resource types in just one "platform" cluster.
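For context, the mechanism referred to here is kcp's APIExport/APIBinding pair: a producer workspace publishes an API, and consumer workspaces bind it without running any controller themselves. The sketch below is illustrative only; the schema name, workspace path, and even the exact API group/version (`apis.kcp.io/v1alpha1` here) vary across kcp releases.

```yaml
# Producer workspace: publish an API for consumption elsewhere.
apiVersion: apis.kcp.io/v1alpha1
kind: APIExport
metadata:
  name: widgets.example.com
spec:
  latestResourceSchemas:
    - v1.widgets.example.com
---
# Consumer workspace: bind the exported API. No controller for the
# widget resource needs to run in the consuming workspace.
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: widgets
spec:
  reference:
    export:
      path: root:platform
      name: widgets.example.com
```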
> Will kcp support a master-slave topology for dual-site deployments, or a distributed replication mechanism such as CockroachDB's, to avoid it becoming a single point of failure, given that by default it is backed by only one etcd datastore?
@cmoulliard kcp has the following dimensions:
tl;dr: etcd-like consistency locally within every logical cluster; eventual consistency and resilience across logical clusters (on shards).
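To make the sharding model concrete, here is a minimal sketch (Go, standard library only) of how a client addresses different logical clusters on the same shard: kcp serves each logical cluster under a `/clusters/<name>` path prefix, so every endpoint looks like an ordinary Kubernetes API server. The shard host and workspace names below are made up for illustration.

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// logicalClusterURL returns the API base URL for a logical cluster on a
// given kcp shard. kcp serves each logical cluster under a
// /clusters/<name> path prefix, so clients talk to what looks like an
// ordinary Kubernetes API server.
func logicalClusterURL(shardHost, clusterName string) (string, error) {
	u, err := url.Parse(shardHost)
	if err != nil {
		return "", err
	}
	u.Path = path.Join("/", u.Path, "clusters", clusterName)
	return u.String(), nil
}

func main() {
	// Two logical clusters on the same shard share one etcd, but each
	// endpoint presents an independent, cluster-like API surface.
	for _, c := range []string{"root:org:team-a", "root:org:team-b"} {
		endpoint, _ := logicalClusterURL("https://shard-1.kcp.example.com", c)
		fmt.Println(endpoint)
	}
}
```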
/vote-sandbox
@amye has called for a vote on [Sandbox] kcp (#47).
The members of the following teams have binding votes:

| Team |
|---|
| @cncf/cncf-toc |
Non-binding votes are also appreciated as a sign of support!
You can cast your vote by reacting to this comment. The following reactions are supported:

| In favor | Against | Abstain |
|---|---|---|
| 👍 | 👎 | 👀 |

Please note that voting for multiple options is not allowed and those votes won't be counted.

The vote will be open for 7 days. It will pass if at least 66% of the users with binding votes vote In favor 👍. Once it's closed, results will be published here as a new comment.
/check-vote
So far 63.64% of the users with binding vote are in favor (passing threshold: 66%).

| In favor | Against | Abstain | Not voted |
|---|---|---|---|
| 7 | 0 | 0 | 4 |

| User | Vote | Timestamp |
|---|---|---|
| rochaporto | In favor | 2023-09-13 6:40:53.0 +00:00:00 |
| TheFoxAtWork | In favor | 2023-09-12 16:00:29.0 +00:00:00 |
| mauilion | In favor | 2023-09-13 20:25:04.0 +00:00:00 |
| kgamanji | In favor | 2023-09-12 16:40:19.0 +00:00:00 |
| RichiH | In favor | 2023-09-12 23:39:49.0 +00:00:00 |
| justincormack | In favor | 2023-09-13 16:04:29.0 +00:00:00 |
| nikhita | In favor | 2023-09-14 7:55:10.0 +00:00:00 |
| @mattfarina | Pending | |
| @dzolotusky | Pending | |
| @cathyhongzhang | Pending | |
| @erinaboyd | Pending | |
/check-vote
So far 72.73% of the users with binding vote are in favor (passing threshold: 66%).

| In favor | Against | Abstain | Not voted |
|---|---|---|---|
| 8 | 0 | 0 | 3 |

| User | Vote | Timestamp |
|---|---|---|
| mauilion | In favor | 2023-09-13 20:25:04.0 +00:00:00 |
| kgamanji | In favor | 2023-09-12 16:40:19.0 +00:00:00 |
| justincormack | In favor | 2023-09-13 16:04:29.0 +00:00:00 |
| rochaporto | In favor | 2023-09-13 6:40:53.0 +00:00:00 |
| RichiH | In favor | 2023-09-12 23:39:49.0 +00:00:00 |
| TheFoxAtWork | In favor | 2023-09-12 16:00:29.0 +00:00:00 |
| nikhita | In favor | 2023-09-14 7:55:10.0 +00:00:00 |
| dzolotusky | In favor | 2023-09-15 13:52:26.0 +00:00:00 |
| @mattfarina | Pending | |
| @cathyhongzhang | Pending | |
| @erinaboyd | Pending | |
The vote passed! 🎉

81.82% of the users with binding vote were in favor (passing threshold: 66%).

| In favor | Against | Abstain | Not voted |
|---|---|---|---|
| 9 | 0 | 0 | 2 |

| User | Vote | Timestamp |
|---|---|---|
| @kgamanji | In favor | 2023-09-12 16:40:19.0 +00:00:00 |
| @RichiH | In favor | 2023-09-12 23:39:49.0 +00:00:00 |
| @mauilion | In favor | 2023-09-13 20:25:04.0 +00:00:00 |
| @dzolotusky | In favor | 2023-09-15 13:52:26.0 +00:00:00 |
| @nikhita | In favor | 2023-09-14 7:55:10.0 +00:00:00 |
| @cathyhongzhang | In favor | 2023-09-18 17:06:00.0 +00:00:00 |
| @TheFoxAtWork | In favor | 2023-09-12 16:00:29.0 +00:00:00 |
| @justincormack | In favor | 2023-09-13 16:04:29.0 +00:00:00 |
| @rochaporto | In favor | 2023-09-13 6:40:53.0 +00:00:00 |
Hi @sttts ! Welcome aboard! We're very excited to get you onboarded as a CNCF sandbox project! Here's the link to your onboarding checklist: https://github.com/cncf/sandbox/issues/195
Here you can communicate any questions or concerns you might have. Please don't hesitate to reach out, I am always happy to help!
Application contact emails
Sponsoring Orgs
andy@clubanderson.com (IBM), sebastian@kubermatic.com, stefan.schimanski@upbound.io, nraghunath@vmware.com, mangirdas@cast.ai
Contributing Orgs
eboyd@redhat.com
Champions
maximilian.braun@sap.com, vasu.chandrasekhara@sap.com, nraghunath@vmware.com
Project Summary
Kubernetes-like control planes for form-factors and use-cases beyond Kubernetes and container workloads
Project Description
Control planes are a common pattern in distributed systems within cloud computing: not only container orchestration, but also application management, network and storage infrastructure, edge device management, and the Internet of Things (IoT) all make use of control planes.
The kcp project is dedicated to building a generic basis for scalable control planes beyond container orchestration, maintaining 100% compatibility with (1) Kubernetes API Machinery, (2) non-domain-specific Kubernetes APIs and (3) the Kubernetes ecosystem of libraries and tooling.
The kcp project exists in order to broaden the applicability of cloud native technology and standards to new use-cases, making the best use of existing knowledge and tooling and allowing interoperability. In doing so, it should strengthen the existing Kubernetes technology, both by contributing back and by keeping the community united and increasing its reach.
From a 30,000-foot view, kcp virtualizes control plane infrastructure in order to improve resource utilization by an order of magnitude or more for certain use-cases and isolation requirements, similar to what containers did for virtual machines but applied to a different domain. Working with smaller units elevates the concept of a control plane and decouples it from the infrastructure beneath. As with containers, technological space for innovation opens up between these smaller control planes, complementing the innovation within them.
kcp provides a generalized Kubernetes API-Machinery based apiserver with the following properties:
kcp uses the vast majority of the Kubernetes code-base 1:1 without changes, and extends it in a few strategic places within k8s.io/apiserver in order to implement logical clusters, while staying 100% compatible on the outside. kcp follows Kubernetes releases closely by bumping it as a dependency on a regular schedule.
The following manifesto deserves to be called out (it is part of the KCP Project Governance), as it fundamentally guides the kcp project, both behaviorally and technically:
Org repo URL (provide if all repos under the org are in scope of the application)
https://github.com/kcp-dev
Project repo URL in scope of application
https://github.com/kcp-dev/kcp
Additional repos in scope of the application
- apiextensions-apiserver – logical-cluster-aware convenience clients for CRDs
- apimachinery – API machinery library for logical-cluster-aware code
- client-go – logical-cluster-aware convenience clients for Kubernetes APIs
- code-generator – logical-cluster-aware convenience client generator
- controller-runtime – multi-workspace-capable adaptation of controller-runtime
- controller-runtime-example – example with multi-workspace controllers
- enhancements – enhancements tracking repo
- helm-charts – Helm chart repo for kcp
- infra – CI configuration
- logicalcluster – API machinery library for the logicalcluster datatype
- kubernetes – branch of Kubernetes plus minimal modifications to add logical clusters, rebased regularly, and greatly simplified with KEP-4088
- kcp.io – landing page of kcp.io
- kcp-dev.github.io – documentation redirect webpage
- contrib-glbc – experimental add-on (not part of core): global ingress controller
- contrib-tmc – experimental add-on (not part of core): transparent multi-cluster compute
Website URL
https://kcp.io
Roadmap
Immediate goals:
Medium/Long term goals:
Work with the communities (along the lines of KEP-4080) to minimize and upstream as much as possible of github.com/kcp-dev/kubernetes and github.com/kcp-dev/controller-runtime, ideally eliminating the need for them.
Roadmap context
No response
Contributing Guide
https://github.com/kcp-dev/kcp/blob/main/CONTRIBUTING.md
Code of Conduct (CoC)
https://github.com/kcp-dev/kcp/blob/main/code-of-conduct.md
Adopters
- IBM's KubeStellar at https://github.com/kubestellar/kubestellar (https://kubestellar.io/)
- Kubermatic, with Kubernetes-API-based management in their product
- Upbound, with Crossplane as a natural consumer of generic control plane technology
- SAP, prototyping a successor of node-/worker-less clusters in Gardener
Contributing or Sponsoring Org
Contributing Org: Red Hat. Sponsoring Orgs: IBM, Kubermatic, Upbound, VMware.
Maintainers file
https://github.com/kcp-dev/kcp/blob/main/OWNERS
IP Policy
Trademark and accounts
Why CNCF?
The core concepts of kcp for building generic scalable control planes on top of it are in place. The project fills a gap in the CNCF landscape by providing a basis for non-container-workload control planes using unified technology, for use-cases where Kubernetes does not fit due to its form-factor. Unified technology fosters collaboration and innovation, and thereby makes the Kubernetes technology and the CNCF ecosystem stronger.
The project strives to be an excellent Kubernetes community citizen, extending the use-cases of the Kubernetes apiserver infrastructure.
Further adoption needs a strong governance model. Contributors and potential adopters have repeatedly stated the need for a vendor-neutral space hosting the kcp project. For a project so heavily based on Kubernetes and seeing itself as a member of the Kubernetes community, CNCF is the natural choice.
Benefit to the Landscape
kcp extends the use-cases of Kubernetes to non-container-workload scalable control planes. It makes heavy use of the apiserver infrastructure of Kubernetes, strives to contribute back to it where possible, and strives to stay 100% compatible with Kubernetes-based tooling. With that it will:
Cloud Native 'Fit'
kcp is built on Kubernetes apiserver technology, by core developers of the Kubernetes community, and it offers 100% compatible APIs to the users of kcp based control planes. kcp is built by members of the Kubernetes community for the Kubernetes community.
The extensibility of the Kubernetes API via Custom Resource Definitions (CRDs) is one of the most crucial selling points of Kubernetes. It has led to a "Cambrian explosion" of workloads using containers, alongside domain-specific operators, leveraging Kubernetes's declarative management design. Similarly, kcp can be thought of as a decisive "extension" technology that increases Kubernetes's coverage toward the entire set of use-cases that span the cloud, that go beyond containers and clusters, and that need a hierarchy of workspaces backed by (non-opinionated) control planes.
Cloud Native 'Integration'
A key kcp principle is to not diverge from interfaces and standards established through Kubernetes and its APIs, with the goal to allow interoperability with existing cloud native tooling:
kcp is built on Kubernetes apiserver technology with 100% API compatibility for shared APIs, allowing the use of other CNCF projects seamlessly, both on the code level (e.g. controller-runtime) and on the tool level (e.g. GitOps tools, CLIs, UIs, etc.). A kcp API endpoint is indistinguishable from a Kubernetes cluster, other than in the set of offered APIs and the server version string, making all (non-container) CNCF projects work out of the box.
Cloud Native Overlap
kcp obviously (and intentionally) overlaps with the Kubernetes apiserver. In contrast, however, it provides another form factor of the same technology, suitable for large-scale control planes with different isolation requirements than Kubernetes itself, and for contexts where the mere existence of the container-workload APIs is a blocker for the use-case.
With KubeStellar, a kcp-related project is in discussion to be submitted to the CNCF sandbox. KubeStellar originated as a subproject in the kcp-dev GitHub organization with the goal of building an edge-workload-specific compute API solution on top of kcp workspaces. Project-wise and scope-wise, KubeStellar is independent and sits higher up the stack. It has its own governance, its own releases, and its own community infrastructure (mailing lists etc.).
Similar projects
None comparable.
There are attempts (e.g. https://github.com/acorn-io/mink and https://github.com/opencontrolplane) to broaden the use-cases of Kubernetes-like APIs, but at the cost of heavy compromises in how completely Kubernetes API Machinery behavior is implemented. Their approach is to reimplement a large part of the Kubernetes apiserver such that kubectl and some generic tools like GitOps work. There is a risk of ecosystem fragmentation from implementing only a subset of a conformant API surface and from diverging in implementation.
Both mentioned projects also have a different goal: they are meant to offer a Kubernetes-like API for a SaaS service, but they are not meant to use the controller pattern of Kubernetes to implement their control plane natively. kcp, in contrast, focuses on the control plane part, not on a shallow Kubernetes-like API wrapper. Both solutions have their space, but they are fundamentally different in design and problem space.
Another overlap exists with vCluster from the user's point of view: vCluster allows running small Kubernetes clusters on top of a bigger cluster, sharing the latter's compute. kcp and vCluster differ in their isolation model: vCluster opts for higher isolation, but in return has a resource consumption two (!) orders of magnitude higher than a kcp workspace. With that, the solution domains are fundamentally different.
On a similar note, providers that already offer managed Kubernetes as a service register a clear and increasing demand from sophisticated users and product teams that want to use only the Kubernetes API/control plane, but as a fully managed service. kcp not only replaces the prevalent stopgap solutions, such as vCluster or node-/worker-less clusters; it can also be the right logical environment for higher-level management constructs (such as inheritance in the hierarchical tree of otherwise isolated workspaces/control planes), the need for which becomes evident as the stopgaps proliferate.
An even bigger difference exists with KubeVirt, which from a 20,000-foot view can also be used to virtualize control planes. But KubeVirt neither provides generic control planes, nor is it comparable in resource utilization: virtual machines are multiple orders of magnitude apart from workspaces in resource consumption, even more so than with vCluster.
Business Product or Service to Project separation
n/a
Project presentations
- Kcp: Towards 1,000,000 Clusters, Name^WWorkspaced CRDs – Stefan Schimanski
- Sponsored Keynote: Kubernetes as the Control Plane for the Hybrid Cloud – Clayton Coleman
- Turbonomic and KubeStellar – Andy Anderson, Jun Duan, Cheuk Lam
Project champions
- Maximilian Braun (SAP)
- Vasu Chandrasekhara (SAP)
- Nikhita Raghunath (VMware and CNCF TOC member)
Additional information
No response