k82cn opened 2 years ago
/triage accepted
There are probably different angles to consider; top of mind:
Overall, I like the idea of exploring this space, but I'm not sure how much time I can personally commit to it in the short term. If the goal of this issue is to work with the community, the best way forward is probably a Google Doc proposal where we can collect use cases and feedback from other users, and to bring this up at the community meeting as well.
/remove-kind feauture
/kind proposal
@fabriziopandini: Those labels are not set on the issue: kind/feauture
Thanks for creating the issue! I'm adding some additional information and findings here.
For people interested in this, I recommend checking this slack thread where we are discussing next steps.
@richardcase and I started a draft proposal document for something similar. I think the goal was pretty much the same, but we didn't explicitly say that the control plane should run in the management cluster. Because of this, we also ended up thinking about how to combine multiple infrastructure providers in a single cluster (as in the blog post).
Alternatives considered:
It may not always be desirable to have the control planes of the workload clusters in the management cluster, but it could still be useful to have, e.g., virtualized control plane nodes and bare metal workers.
Main alternatives (see the draft proposal for details):
Thanks for your input; this requirement is more about a new control plane provider. For mixed providers, I'd like to contribute :)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@lentzi90 @richardcase
Thank you for your proposal regarding mixed providers!
Say we have a virtualized control plane and bare metal workers: which option would you suggest?
Is it safe to use the approach from the mentioned blog post in this case, e.g. with the vSphere and BYOH providers?
Also, do you think that converting the Cluster's infrastructureRef into a list is something that could be implemented in the future? (See the hypothetical sketch below for what I mean.)
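For illustration only, a purely hypothetical sketch of such an API; a plural infrastructureRefs field does not exist in Cluster API today, and the field name and shape here are invented:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: mixed-cluster
spec:
  # Hypothetical plural field; the real API has a single infrastructureRef.
  infrastructureRefs:
  - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster         # virtualized control plane nodes
    name: mixed-cluster-cp
  - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: ByoCluster             # BYOH bare metal workers
    name: mixed-cluster-workers
```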
What I wrote about in the blog post is basically a hack, so I would not rely on it for production. Things have progressed for Kamaji though, so I think that is your best way forward for now. I have an example of combining Metal3 and Kamaji if you want to try it (sketched below).
Regarding infrastructureRef, I'm not aware of any progress in this direction, unfortunately.
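For anyone who finds this later, here is a rough sketch of how the Metal3 + Kamaji combination can be wired together. The KamajiControlPlane API version and field names are from the Kamaji Cluster API control plane provider as I remember them, so verify against the current provider docs before using:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  # The control plane runs as pods in the management cluster via Kamaji...
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
    kind: KamajiControlPlane
    name: my-cluster
  # ...while the workers are bare metal machines managed by Metal3.
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: Metal3Cluster
    name: my-cluster
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
metadata:
  name: my-cluster
spec:
  replicas: 2
  version: v1.27.3
  network:
    serviceType: LoadBalancer   # gives the bare metal workers a stable endpoint for the hosted API server
```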
/priority backlog
User Story
As a user/operator, I would like to launch the cluster control plane inside Kubernetes; this avoids the need for an additional infrastructure provider (only a provider for the worker nodes is required) and saves cost by running the control plane as containers/pods.
Detailed Description
Currently, I'm using the Metal3 provider to manage workers, but using bare metal machines for the control plane is a cost concern, and setting up additional VMs, e.g. with BYOH, is complex (https://metal3.io/blog/2022/07/08/One_cluster_multiple_providers.html). So I'd like to manage the cluster control plane in the current Kubernetes cluster by introducing a new ControlPlane provider.
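To make this concrete, below is a minimal sketch (not the kink implementation; all names and the namespace are placeholders) of a workload cluster's kube-apiserver running as an ordinary Deployment in the management cluster. A Service in front of it would give the bare metal workers a stable endpoint to join, and a new ControlPlane provider would reconcile objects like these instead of control plane Machines:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-cluster-kube-apiserver
  namespace: my-cluster-system             # placeholder: one namespace per workload cluster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-cluster-kube-apiserver
  template:
    metadata:
      labels:
        app: my-cluster-kube-apiserver
    spec:
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.27.3
        command:
        - kube-apiserver
        - --etcd-servers=https://my-cluster-etcd:2379   # etcd hosted alongside, e.g. as a StatefulSet
        - --service-cluster-ip-range=10.96.0.0/12
        - --client-ca-file=/etc/kubernetes/pki/ca.crt
        - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
        - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
        volumeMounts:
        - name: pki
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: pki
        secret:
          secretName: my-cluster-pki       # placeholder Secret holding the cluster certificates
```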
Anything else you would like to add:
Here's an implementation (kube-apiserver only) at https://github.com/openbce/kink; I'd like to work with the community to move it forward.
/kind feature