Closed: sathieu closed this issue 1 year ago
@sathieu: The label(s) /label feature
cannot be applied. These labels are supported: api-review, tide/merge-method-merge, tide/merge-method-rebase, tide/merge-method-squash, team/katacoda, refactor
@sathieu Have you explored Tanzu Kubernetes Guest Cluster - https://cormachogan.com/2020/09/29/deploying-tanzu-kubernetes-guest-cluster-in-vsphere-with-tanzu/
CSI driver running in the Tanzu Guest Cluster does not make a connection to the vCenter server.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Possibly OAuth - if added fully in vSphere 8 - could at least remove the need to provide static credentials. It would also make it possible to couple access keys with a limited set of privileges.
@sathieu Have you explored Tanzu Kubernetes Guest Cluster - https://cormachogan.com/2020/09/29/deploying-tanzu-kubernetes-guest-cluster-in-vsphere-with-tanzu/
CSI driver running in the Tanzu Guest Cluster does not make a connection to the vCenter server.
How is that possible? Doesn't the CSI driver require access to the API to create volumes and attach/detach volumes to/from cluster nodes? I skimmed the article you linked, but it doesn't really explain that topic.
We "solved" the issue by heavily limiting what those credentials can do, so the impact is limited to "a cluster admin can destroy his own cluster" (which is always the case, regardless of the CSI). But the required communication from _all_ worker nodes to the vSphere API is still a burden we would like to see gone from a security perspective.
The (hopefully) upcoming full OAuth integration in vSphere 8 might eventually help with the credentials and privilege situation.
@omniproc According to the Tanzu Kubernetes Grid Service Architecture (the diagram is not great, but I don't know a better one), workload clusters access the API through the supervisor cluster. This is a bit better, but the traffic still flows transitively from the workload cluster to the vSphere API.
Also, this way of working is not open-source, or at least not documented well enough to install on a vanilla cluster.
@sathieu that's unfortunate and seems like a rather artificial limitation put on the OSS project.
I'm not sure if it's helpful, but the source hints that the ImprovedVolumeTopology
flag does the following:
"is the feature flag used to make the following improvements to topology feature: avoid taking in VC credentials in node daemonset."
However, in my tests enabling this feature gate seems to do much more than just that, and the docs are not really clear on what it does exactly. I guess it's linked to the topology-aware setup; however, normally that would be enabled using those flags, so that's kind of confusing. I didn't have time to play around with it or read the source to better understand the behaviour (my guess is that the args in the manifest are leftovers from previous versions and by now the gate is enabled using this ConfigMap).
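For anyone else experimenting: in the upstream v2.x deployment manifests, the feature gates are driven by a ConfigMap rather than container args. A sketch of what toggling this gate might look like (the ConfigMap name, namespace, and key are taken from the public manifests - verify them against the manifest for your driver version before applying):

```yaml
# Sketch only: names are from the upstream vsphere-csi-driver v2.x manifests;
# confirm against your driver version before applying.
apiVersion: v1
kind: ConfigMap
metadata:
  name: internal-feature-states.csi.vsphere.vmware.com
  namespace: vmware-system-csi
data:
  improved-volume-topology: "true"
```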
The docs about the "topology aware" feature add even more confusion and tell the exact opposite:
By default, the vSphere Cloud Provider Interface and vSphere Container Storage Plug-in pods are scheduled on Kubernetes control plane nodes. For non-topology aware Kubernetes clusters, it is sufficient to provide the credentials of the control plane node to vCenter Server where this cluster is running. For topology-aware clusters, every Kubernetes node must discover its topology by communicating with vCenter Server. This is required to utilize the topology-aware provisioning and late binding feature.
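For context on the late-binding feature that quote mentions: a topology-aware setup is typically driven from the StorageClass. A sketch under the assumption that the zone topology key is the one the vSphere CSI driver documents (`topology.csi.vmware.com/k8s-zone`); the class name and zone values are placeholders:

```yaml
# Sketch: topology-aware StorageClass with late binding.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-zonal          # placeholder name
provisioner: csi.vsphere.vmware.com
volumeBindingMode: WaitForFirstConsumer   # provision only once a pod is scheduled
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.csi.vmware.com/k8s-zone
        values: ["zone-a", "zone-b"]      # placeholder zones
```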
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
Actually, this is already possible; I've implemented it in the Helm chart: https://github.com/vsphere-tmm/helm-charts/pull/50.
Still, I think some official docs are needed, reopening.
@sathieu: Reopened this issue.
/assign @divyenpatel
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What happened:
When using vSphere Container Storage Plugin, a set of privileges is needed on vSphere.
In our environment, vSphere API access is restricted to trusted subnets, and the Kubernetes nodes are not in those subnets (not even the control plane nodes). We can add another trusted Kubernetes cluster in those restricted subnets with access to both the vSphere API and the Kubernetes API of the workload clusters.
What you expected to happen:
Ability to move parts of the vSphere Container Storage Plugin into a management cluster, and ensure only this cluster needs access to the vSphere API.
How to reproduce it (as minimally and precisely as possible):
Install it in a cluster without access to the vSphere API -> it fails.
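To confirm the failure mode before installing, a trivial reachability probe run from a node shows whether the vCenter endpoint is even routable. This is a generic TCP check, not part of the plugin; the hostname below is a placeholder:

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hostname) - run from a worker node:
# can_reach("vcenter.example.com", 443)
```

If this returns False from the worker nodes but True from the management cluster, you are in exactly the situation described above.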