loft-sh / vcluster

vCluster - Create fully functional virtual Kubernetes clusters - Each vcluster runs inside a namespace of the underlying k8s cluster. It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.
https://www.vcluster.com
Apache License 2.0

Failed to cordon node in virtual cluster #2121

Closed fishingfly closed 2 months ago

fishingfly commented 2 months ago

What happened?

When I use this command to cordon a node in a k8s cluster created by vCluster:

kubectl cordon <node-name>

the node status is as follows: [screenshot of node status]

What did you expect to happen?

The node can be cordoned in the virtual cluster.

How can we reproduce it (as minimally and precisely as possible)?

In the virtual cluster, use this command to cordon a node:

kubectl cordon <node-name>

At the same time, watch the node status with this command:

kubectl get node -w

Anything else we need to know?

Looking at the code at https://github.com/loft-sh/vcluster/blob/v0.19.6/pkg/controllers/resources/nodes/translate.go#L25, the translateUpdateBackwards func updates the virtual node spec from the host node spec, so changes made only to the virtual node (such as cordoning) get overwritten on the next sync.
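For illustration only, here is a heavily simplified, hypothetical sketch of that backward-translation idea (this is not the actual vCluster code; the function and package names are made up), showing why a cordon set only on the virtual node would be reverted:

```go
package nodes

import corev1 "k8s.io/api/core/v1"

// syncHostSpecToVirtual is a hypothetical stand-in for the kind of logic
// referenced above: the virtual node's spec is rebuilt from the host node's
// spec on each sync, so spec.unschedulable (set by kubectl cordon) and the
// taints on the virtual node are overwritten with the host's values.
func syncHostSpecToVirtual(hostNode, virtualNode *corev1.Node) {
	virtualNode.Spec.Unschedulable = hostNode.Spec.Unschedulable // cordon state follows the host
	virtualNode.Spec.Taints = hostNode.Spec.Taints               // taints follow the host as well
}
```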

Host cluster Kubernetes version

```console
$ kubectl version
# paste output here
```

vcluster version

0.19.6

VCluster Config

```
# My vcluster.yaml / values.yaml here
```

yeahdongcn commented 2 months ago

We've encountered the same issue (by accident). Just wondering if this is by design or if it needs to be fixed.

FabianKramm commented 2 months ago

Hey @yeahdongcn @fishingfly! Thanks for reporting this issue and sorry for the late response! That is actually expected behaviour for the default configuration, as nodes shouldn't be changed from within the vCluster since that would break isolation. There are, however, two ways that should achieve what you want to do:

  1. Enable the virtual scheduler, which lets you add labels, taints, and other changes to the virtual nodes and prevents pods from being scheduled there. You can enable it via controlPlane.advanced.virtualScheduler.enabled=true (https://vcluster.com/docs/vcluster/configure/vcluster-yaml/control-plane/other/advanced/virtual-scheduler); see the vcluster.yaml sketch after this list. The advantage is that cordoning a node within the virtual cluster won't mark it unschedulable in the host cluster, so pods on the host are unaffected and only virtual pods will not be scheduled onto it.
  2. Allow syncing back node changes via sync.fromHost.nodes.syncBackChanges=true (https://vcluster.com/docs/vcluster/next/configure/vcluster-yaml/sync/from-host/nodes#sync-real-nodes-and-sync-back-labels-and-taints), which syncs taints and labels back to the host cluster. This means that if you cordon a node within the virtual cluster, it becomes unschedulable for everyone, including pods on the host cluster.
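
For reference, minimal vcluster.yaml sketches for the two options (the field paths are taken from the comment above; this assumes the vcluster.yaml config format from the linked docs, so adjust for your setup and version):

```yaml
# Option 1: virtual scheduler; cordon and taints stay local to the virtual cluster
controlPlane:
  advanced:
    virtualScheduler:
      enabled: true
```

```yaml
# Option 2: sync node changes back; cordoning also affects the host cluster
sync:
  fromHost:
    nodes:
      syncBackChanges: true
```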
yeahdongcn commented 2 months ago

Thank you for the detailed explanation. Option 1 seems like the better choice for us since it won’t impact the host cluster.