Closed: weiling61 closed this issue 7 months ago
Not exactly similar to #337. The current pod syncer already does not support the nodeName != "" case. This proposal adds a nodeName != "" check to node lifecycle management as well, so that nodeName == "" and nodeName != "" become two mutually exclusive cases. That way, users can implement their own syncer to cover the nodeName != "" case.
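For illustration only, the intended split can be written as two mutually exclusive predicates. The function names below are hypothetical, not the syncer's actual API; this is a sketch of the dispatch rule, assuming a Go syncer built on the upstream core/v1 types.

```go
package syncer

import v1 "k8s.io/api/core/v1"

// handledByDefaultSyncer matches pods that still need scheduling in the
// super cluster (spec.nodeName is empty).
func handledByDefaultSyncer(pod *v1.Pod) bool {
	return pod.Spec.NodeName == ""
}

// handledByCustomSyncer is the mutually exclusive case: pods already bound
// to a node by a scheduler running inside the tenant virtual cluster.
func handledByCustomSyncer(pod *v1.Pod) bool {
	return pod.Spec.NodeName != ""
}
```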
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
User Story
As a user, I would like to run dedicated nodes inside my tenant virtual cluster and use a customized scheduler to place pods onto them.
Background info: A dedicated node is properly tainted so that it can only be accessed by one tenant virtual cluster. With dedicated nodes, a customized scheduler can be used in the tenant virtual cluster to place pods onto one of the dedicated nodes.
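As a rough sketch of that setup (the taint key "tenancy.x-k8s.io/dedicated" and the helper names are assumptions for illustration, not part of this proposal), dedicating a node to a single tenant could look like:

```go
package syncer

import v1 "k8s.io/api/core/v1"

// dedicateNode taints a physical node so that only pods tolerating the
// tenant-specific taint can land on it.
func dedicateNode(node *v1.Node, tenant string) {
	node.Spec.Taints = append(node.Spec.Taints, v1.Taint{
		Key:    "tenancy.x-k8s.io/dedicated", // hypothetical taint key
		Value:  tenant,
		Effect: v1.TaintEffectNoSchedule,
	})
}

// tolerateDedicatedNode adds the matching toleration to a tenant pod so the
// tenant's customized scheduler can bind it to the dedicated node.
func tolerateDedicatedNode(pod *v1.Pod, tenant string) {
	pod.Spec.Tolerations = append(pod.Spec.Tolerations, v1.Toleration{
		Key:      "tenancy.x-k8s.io/dedicated",
		Operator: v1.TolerationOpEqual,
		Value:    tenant,
		Effect:   v1.TaintEffectNoSchedule,
	})
}
```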
Detailed Description
Supporting Dedicated Node in Virtual Cluster
A vnode uses the “k8s.io/api/core/v1” Node object template. In the current virtual cluster syncer, a vnode is created under a virtual cluster when a pod is placed on the corresponding physical node, and it is garbage collected when the last pod belonging to that virtual cluster is removed from the corresponding physical node.
In order to support dedicated nodes in a virtual cluster, the node should be present in the virtual cluster permanently, without participating in the vnode lifecycle management described above. We propose to use the label “tenancy.x-k8s.io/virtualnode: true” for the current vnode syncing and vnode lifecycle management work; a vnode with the label “tenancy.x-k8s.io/virtualnode: false” will skip node syncing and lifecycle management, as sketched below.
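A minimal sketch of that check, assuming the label semantics proposed above (function names are illustrative):

```go
package syncer

import v1 "k8s.io/api/core/v1"

// isDedicatedVNode reports whether a vnode has opted out of lifecycle
// management. A missing label or "true" keeps today's behavior; only an
// explicit "false" marks the node as dedicated and permanent.
func isDedicatedVNode(node *v1.Node) bool {
	return node.Labels["tenancy.x-k8s.io/virtualnode"] == "false"
}

func syncVNode(node *v1.Node) {
	if isDedicatedVNode(node) {
		return // left to a customized node-syncer; never garbage collected here
	}
	// ... existing vnode create/update/GC logic runs here.
}
```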
We propose to use the label “tenancy.x-k8s.io/virtualnode” to define the current node syncing boundary. Users can implement a separate customized node-syncer to sync the dedicated nodes to virtual clusters, while the current node syncer keeps handling the standard node syncing work.
The change to the current node syncer and pod syncer is very small: the label “tenancy.x-k8s.io/virtualnode” is already handled by the node syncer today. We will just make sure that a dedicated node is never added to the “nodeNameToCluster[nodeName][ClusterName]” map or the “clusterVNodePodMap[clusterName][nodeName][PodUID]” map.
More specifically, a “tenancy.x-k8s.io/virtualnode” label check will be added at the points where these maps are populated, to prevent a dedicated node with the label “tenancy.x-k8s.io/virtualnode: false” from being added to the maps above.
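A sketch of the proposed guard, reusing the map shapes named above (the surrounding bookkeeping is simplified, and the function name is hypothetical):

```go
package syncer

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
)

var (
	nodeNameToCluster  = map[string]map[string]struct{}{}
	clusterVNodePodMap = map[string]map[string]map[types.UID]struct{}{}
)

// trackVNodePod records a vnode/pod pair for lifecycle management, unless
// the node is dedicated ("tenancy.x-k8s.io/virtualnode: false").
func trackVNodePod(node *v1.Node, clusterName string, podUID types.UID) {
	if node.Labels["tenancy.x-k8s.io/virtualnode"] == "false" {
		return // dedicated nodes never enter the lifecycle bookkeeping maps
	}
	nodeName := node.Name
	if nodeNameToCluster[nodeName] == nil {
		nodeNameToCluster[nodeName] = map[string]struct{}{}
	}
	nodeNameToCluster[nodeName][clusterName] = struct{}{}
	if clusterVNodePodMap[clusterName] == nil {
		clusterVNodePodMap[clusterName] = map[string]map[types.UID]struct{}{}
	}
	if clusterVNodePodMap[clusterName][nodeName] == nil {
		clusterVNodePodMap[clusterName][nodeName] = map[types.UID]struct{}{}
	}
	clusterVNodePodMap[clusterName][nodeName][podUID] = struct{}{}
}
```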
Supporting Customized Scheduler in Virtual Cluster
Background: The idea of using a customized scheduler in a virtual cluster is to let the customized scheduler assign a nodeName to a virtual pod. A virtual pod with nodeName set can then be synced onto the targeted physical node without going through the scheduler in the super cluster. Virtual pods with nodeName specified can be synced by a customized pod syncer, while the current pod syncer keeps syncing standard virtual pods.
Currently, the pod syncer already skips all syncing when it detects nodeName != “” in a virtual pod; however, it emits an error message.
We propose a feature gate, “skip_sync_pod_nodeName”, that tells the current pod syncer whether the error message should be emitted. This way, there is no confusion.
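A sketch of the gated behavior, assuming a boolean feature gate named "skip_sync_pod_nodeName" as proposed (how the gate is wired into configuration is an assumption; the point is that a pre-bound pod is always skipped, and the gate only controls whether that is reported as an error):

```go
package syncer

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// skipSyncPodNodeName would be populated from the feature-gate configuration.
var skipSyncPodNodeName = true

func syncPod(pod *v1.Pod) error {
	if pod.Spec.NodeName != "" {
		if skipSyncPodNodeName {
			// Gate on: a customized pod syncer owns this case, skip quietly.
			return nil
		}
		// Gate off: keep today's behavior and surface an error.
		return fmt.Errorf("pod %s/%s has spec.nodeName set; the default syncer does not sync pre-bound pods", pod.Namespace, pod.Name)
	}
	// ... normal syncing of unbound virtual pods continues here.
	return nil
}
```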