kubernetes-sigs / cluster-api

Home for Cluster API, a subproject of sig-cluster-lifecycle
https://cluster-api.sigs.k8s.io
Apache License 2.0

Launch ControlPlane in existing cluster #7475

Open · k82cn opened this issue 2 years ago

k82cn commented 2 years ago

User Story

As a user/operator, I would like to launch the cluster control plane (master) inside an existing Kubernetes cluster; this avoids needing an additional infrastructure provider (a provider is then only required for the worker nodes) and saves cost by running the control plane as containers/pods.

Detailed Description

Currently I'm using the Metal3 provider to manage workers, but using bare metal for the control plane is a cost concern, and setting up additional VMs (e.g. with BYOH) is complex; see https://metal3.io/blog/2022/07/08/One_cluster_multiple_providers.html. So I'd like to manage the workload cluster's control plane inside the current Kubernetes cluster by introducing a new ControlPlane provider.

Anything else you would like to add:

Here's an implementation (kube-apiserver only) at https://github.com/openbce/kink; I'd like to work with the community to move this forward.
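To make the idea concrete, here is roughly what hosting a workload cluster's kube-apiserver inside the management cluster can look like. This is a minimal illustrative sketch, not kink's actual manifests: the names, namespace, image tag, and flags are assumptions, and a real API server needs many more flags plus etcd and TLS material.

```yaml
# Illustrative sketch only: a workload cluster's kube-apiserver hosted as an
# ordinary Deployment in the management cluster. Names and flags are
# assumptions; a production setup needs more flags, etcd, and certificates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workload-1-kube-apiserver
  namespace: workload-1-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: workload-1-kube-apiserver
  template:
    metadata:
      labels:
        app: workload-1-kube-apiserver
    spec:
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.25.3
        command:
        - kube-apiserver
        - --etcd-servers=https://workload-1-etcd:2379
        - --service-cluster-ip-range=10.96.0.0/16
        - --secure-port=6443
        volumeMounts:
        - name: pki
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: pki
        secret:
          secretName: workload-1-apiserver-certs  # hypothetical Secret
---
# Worker nodes reach the hosted API server through a Service; how to expose
# this endpoint is the main networking question for such a provider.
apiVersion: v1
kind: Service
metadata:
  name: workload-1-apiserver
  namespace: workload-1-system
spec:
  type: LoadBalancer
  selector:
    app: workload-1-kube-apiserver
  ports:
  - port: 6443
    targetPort: 6443
```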

/kind feature

fabriziopandini commented 2 years ago

/triage accepted

there are probably different angles to consider, top of mind:

- writing a new CP provider is already a supported use case (see the sketch after this list)
- some providers have already worked on similar ideas (a control plane deployed as pods, e.g. https://github.com/kubernetes-sigs/cluster-api-provider-nested); can we join forces?
- one issue when decoupling the control plane from machines is networking; what kind of constraints are we envisioning here?

Overall, I like the idea of exploring this space, but I'm not sure how much time I can personally commit to it in the short term. If the goal of this issue is to work with the community, the best way forward is probably a Google doc/proposal where we can collect use cases and feedback from other users, and to bring this up at the community meeting as well.
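To illustrate the first bullet: the hook Cluster API already provides is `Cluster.spec.controlPlaneRef`, which can point at any resource implementing the control plane provider contract. A sketch of the wiring, where `KinkControlPlane` and its spec are hypothetical placeholders while the Cluster fields are the real v1beta1 API:

```yaml
# Sketch: plugging a pod-hosted control plane provider into a Cluster.
# KinkControlPlane is a hypothetical kind; controlPlaneRef/infrastructureRef
# are the real Cluster API v1beta1 fields.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: workload-1
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1  # hypothetical group/version
    kind: KinkControlPlane
    name: workload-1-cp
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: Metal3Cluster   # bare-metal workers, as in the use case above
    name: workload-1
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KinkControlPlane   # hypothetical: would reconcile control plane pods
metadata:
  name: workload-1-cp
spec:
  version: v1.25.3
  replicas: 2
```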

/remove-kind feauture
/kind proposal

k8s-ci-robot commented 2 years ago

@fabriziopandini: Those labels are not set on the issue: kind/feauture

In response to [this](https://github.com/kubernetes-sigs/cluster-api/issues/7475#issuecomment-1301301697):

> /triage accepted
>
> there are probably different angles to consider, top of mind:
> - writing a new CP provider is already a supported use case
> - some providers already worked on similar ideas (CP deployed as a pod, like e.g. https://github.com/kubernetes-sigs/cluster-api-provider-nested). Can we join forces?
> - one issue when decoupling CP from machines is networking, what kind of constraints are we envisioning here
>
> but overall, I like the idea of exploring this space but not sure how much time I can personally commit to it in the short term; however, if the goal of this issue is to work with the community probably the best way forward is to work on a google doc/proposal where to collect use cases and feedback from other users and bring this up at the community meeting as well
>
> /remove-kind feauture
> /kind proposal

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
lentzi90 commented 2 years ago

Thanks for creating the issue! I'm adding some additional information and findings here.

For people interested in this, I recommend checking the Slack thread where we are discussing next steps.

@richardcase and I started a draft proposal document for something similar. I think the goal was pretty much the same, but we didn't explicitly say that the control plane should run in the management cluster. Because of this, we also ended up thinking about how to combine multiple infrastructure providers in a single cluster (as in the blog post).

Control plane in the management cluster

Alternatives considered:

Mixed provider clusters

It may not always be desirable to have the control planes of the workload clusters in the management cluster. But it could still be useful to have e.g. virtualized control plane nodes and bare metal workers.

Main alternatives (see the draft proposal for details):

k82cn commented 2 years ago

Thanks for your input! This requirement is more about a new control plane provider. For mixed providers, I'd like to contribute :)

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

vaibhav2107 commented 1 year ago

/remove-lifecycle stale

jstallenberger commented 9 months ago

@lentzi90 @richardcase Thank you for your proposal regarding mixed providers! In the case of a virtualized control plane with bare metal workers, which option would you suggest? Is it safe to use the approach from the mentioned blog post in this case, e.g. with the vSphere and BYOH providers? Also, do you think that converting the Cluster's infrastructureRef into a list is something that could be implemented in the future?

lentzi90 commented 9 months ago

What I wrote about in the blog post is basically a hack, so I would not rely on it for production. Things have progressed for Kamaji though, so I think that is your best way forward for now. I have an example of combining Metal3 and Kamaji if you want to try it; the rough shape is sketched below.

Regarding infrastructureRef, I'm not aware of any progress in this direction unfortunately.
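For orientation, the Kamaji route goes through the standard contract discussed above: the Cluster's controlPlaneRef points at a KamajiControlPlane (whose control plane pods run in the management cluster) while infrastructureRef points at a Metal3Cluster for the bare-metal workers. A rough sketch, with apiVersions recalled from memory rather than copied from my example, so verify against the providers' docs:

```yaml
# Rough sketch of the Kamaji + Metal3 pairing; check apiVersions and fields
# against the Kamaji control plane provider and Metal3 documentation.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: workload-1
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
    kind: KamajiControlPlane   # hosted control plane, runs as pods via Kamaji
    name: workload-1
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: Metal3Cluster        # bare-metal workers via Metal3
    name: workload-1
```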

fabriziopandini commented 7 months ago

/priority backlog