kubernetes-retired / cluster-api-provider-nested

Cluster API Provider for Nested Clusters
Apache License 2.0

Add Dedicated Node Support and Customized Scheduler in VirtualCluster using Customized Syncers #344

Closed weiling61 closed 7 months ago

weiling61 commented 1 year ago

User Story

As a user I would like to run tenant workloads on dedicated nodes and use a customized scheduler inside a tenant virtual cluster, implemented via customized syncers.

Background info: A dedicated node is properly tainted so that it can only be used by one tenant virtual cluster. With dedicated nodes in place, a customized scheduler can run in the tenant virtual cluster to place pods onto one of the dedicated nodes.
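For illustration, here is a minimal Go sketch of how a super-cluster admin might taint a node for a single tenant. The taint key `tenancy.x-k8s.io/dedicated` and the helper name are assumptions for this example, not anything defined by this proposal:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// dedicateNode appends a NoSchedule taint so that only pods carrying the
// matching toleration (i.e. pods synced from the owning tenant's virtual
// cluster) can be scheduled onto this node. The taint key
// "tenancy.x-k8s.io/dedicated" is a hypothetical example.
func dedicateNode(node *corev1.Node, tenant string) {
	node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{
		Key:    "tenancy.x-k8s.io/dedicated",
		Value:  tenant,
		Effect: corev1.TaintEffectNoSchedule,
	})
}
```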

Detailed Description

Supporting Dedicated Node in Virtual Cluster

A vNode uses the "k8s.io/api/core/v1" Node object as its template. In the current virtual cluster syncer, a vNode is created under a virtual cluster when a pod is placed on the corresponding physical node. The vNode is garbage collected when the last pod belonging to that virtual cluster is removed from the corresponding physical node.

In order to support dedicated nodes in a virtual cluster, the node should be present in the virtual cluster permanently, without participating in the vNode lifecycle management described above. We propose to use the label "tenancy.x-k8s.io/virtualnode: true" for the current vNode syncing and lifecycle management work. A vNode with the label "tenancy.x-k8s.io/virtualnode: false" will skip node syncing and lifecycle management.

In other words, we propose to use the label "tenancy.x-k8s.io/virtualnode" to define the current node syncing boundary. Users can implement a separate customized node-syncer to sync dedicated nodes into virtual clusters, while the current node syncer keeps handling standard node syncing work.
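A minimal sketch of what such a customized node-syncer could create, assuming it copies the physical Node object into the tenant cluster; the helper name is hypothetical:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// newDedicatedVNode builds a permanent vNode for a dedicated physical node.
// Setting "tenancy.x-k8s.io/virtualnode" to "false" tells the standard node
// syncer to leave this vNode out of its syncing and lifecycle management, so
// the customized node-syncer fully owns it.
func newDedicatedVNode(physicalNode *corev1.Node) *corev1.Node {
	vNode := physicalNode.DeepCopy()
	vNode.ResourceVersion = "" // must be cleared before creating the copy in another cluster
	if vNode.Labels == nil {
		vNode.Labels = map[string]string{}
	}
	vNode.Labels["tenancy.x-k8s.io/virtualnode"] = "false"
	return vNode
}
```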

The change to the current node syncer and pod syncer is very small: the label "tenancy.x-k8s.io/virtualnode" is already handled by the node syncer today. We will just make sure that dedicated nodes are never added to the "nodeNameToCluster[nodeName][ClusterName]" and "clusterVNodePodMap[clusterName][nodeName][PodUID]" maps.

More specifically, a "tenancy.x-k8s.io/virtualnode" label check will be added in:

func (c *controller) updateClusterVNodePodMap(...) 

to prevent dedicated nodes with the label "tenancy.x-k8s.io/virtualnode: false" from being added to the above maps.
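A minimal sketch of the proposed guard, assuming the check can see the node's labels; the helper name `isDedicatedVNode` is hypothetical, and the real call site is `updateClusterVNodePodMap`:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// isDedicatedVNode reports whether a node has opted out of vNode lifecycle
// management. updateClusterVNodePodMap would return early for such nodes, so
// they never enter nodeNameToCluster[nodeName][ClusterName] or
// clusterVNodePodMap[clusterName][nodeName][PodUID].
func isDedicatedVNode(node *corev1.Node) bool {
	return node.Labels["tenancy.x-k8s.io/virtualnode"] == "false"
}
```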

Supporting Customized Scheduler in Virtual Cluster

Background: The idea of using a customized scheduler in a virtual cluster is to let the customized scheduler assign a nodeName to a virtual pod. A virtual pod with a nodeName set can be synced onto the targeted physical node without going through the scheduler in the super cluster. Virtual pods with a nodeName specified can be synced by a customized pod syncer, while the current pod syncer keeps syncing standard virtual pods.
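A minimal sketch of the customized scheduler's placement step, assuming a plain client-go clientset pointed at the tenant virtual cluster's API server; the function name is hypothetical:

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindToDedicatedNode binds a virtual pod to a dedicated vNode, which sets
// spec.nodeName on the virtual pod. A customized pod syncer can then place
// the pod on the matching physical node without involving the super
// cluster's scheduler.
func bindToDedicatedNode(ctx context.Context, client kubernetes.Interface, pod *corev1.Pod, nodeName string) error {
	binding := &corev1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: pod.Namespace, Name: pod.Name},
		Target:     corev1.ObjectReference{Kind: "Node", Name: nodeName},
	}
	return client.CoreV1().Pods(pod.Namespace).Bind(ctx, binding, metav1.CreateOptions{})
}
```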

Currently, the pod syncer already skips all syncing when it detects nodeName != "" on a virtual pod; however, it also emits an error message.

We propose a special feature gate, "skip_sync_pod_nodeName", to let the current pod syncer decide whether the error message should be emitted. This way, there will be no confusion.
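A minimal sketch of the gate's effect in the standard pod syncer; the feature-gate plumbing is simplified to a package variable here, since the exact wiring is an implementation detail not specified by this proposal:

```go
package example

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// skipSyncPodNodeName stands in for the proposed "skip_sync_pod_nodeName"
// feature gate; in the real syncer this would come from the feature-gate
// machinery rather than a package variable.
var skipSyncPodNodeName = true

// checkVPod sketches the proposed behavior: pods with nodeName pre-set are
// always skipped by the standard syncer, but an error is only reported when
// the gate is off.
func checkVPod(vPod *corev1.Pod) error {
	if vPod.Spec.NodeName == "" {
		return nil // standard pod syncer handles this pod as today
	}
	if skipSyncPodNodeName {
		return nil // silently leave the pod to the customized pod syncer
	}
	return fmt.Errorf("virtual pod %s/%s already has nodeName %q set",
		vPod.Namespace, vPod.Name, vPod.Spec.NodeName)
}
```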

christopherhein commented 1 year ago

Similar issue - https://github.com/kubernetes-sigs/cluster-api-provider-nested/issues/337

weiling61 commented 1 year ago

Not exactly similar to #337. The purpose of this proposal is to make sure the current pod syncer does not handle the nodeName != "" case; we propose to add a nodeName != "" check in node lifecycle management too, and to make nodeName == "" and nodeName != "" two mutually exclusive cases. This way, users can implement their own syncer to cover the nodeName != "" case.

k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 7 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-nested/issues/344#issuecomment-2005698874):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.