kubernetes-sigs / vsphere-csi-driver

vSphere storage Container Storage Interface (CSI) plugin
https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/index.html
Apache License 2.0

Avoid access to vSphere API from Pods in the cluster #1742

Closed: sathieu closed this issue 1 year ago

sathieu commented 2 years ago

What happened:

When using the vSphere Container Storage Plug-in, a set of privileges is needed on vSphere.

In our environment, access to the vSphere API is restricted to trusted subnets, and the Kubernetes nodes are not in those subnets (not even the control plane nodes). We can, however, add another trusted Kubernetes cluster inside those restricted subnets, with access to both the vSphere API and the Kubernetes API of the workload clusters.

What you expected to happen:

Ability to move parts of the vSphere Container Storage Plugin into a management cluster, ensuring that only this cluster needs access to the vSphere API (a sketch of the idea follows the reproduction steps below).

How to reproduce it (as minimally and precisely as possible):

Install the plugin in a cluster without access to the vSphere API -> it fails.
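For illustration, a minimal sketch of the deployment mode described above, assuming the controller Pod can be pointed at the workload cluster through a mounted kubeconfig. The upstream CSI sidecars do expose a `--kubeconfig` flag; that the `vsphere-csi-controller` container honors the `KUBECONFIG` environment variable is an assumption here, and all names and image versions are illustrative:

```yaml
# Hypothetical: the CSI controller runs in the trusted *management* cluster
# and watches the *workload* cluster via a kubeconfig Secret. Only the
# management cluster needs network access to the vSphere API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vsphere-csi-controller-workload1   # hypothetical per-workload-cluster name
  namespace: vmware-system-csi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vsphere-csi-controller-workload1
  template:
    metadata:
      labels:
        app: vsphere-csi-controller-workload1
    spec:
      containers:
        - name: csi-attacher
          image: registry.k8s.io/sig-storage/csi-attacher:v4.3.0
          args:
            - --csi-address=$(ADDRESS)
            - --kubeconfig=/etc/workload/kubeconfig   # watch the workload cluster, not the local one
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: workload-kubeconfig
              mountPath: /etc/workload
              readOnly: true
        # provisioner/resizer/syncer sidecars omitted; they accept the same flag
        - name: vsphere-csi-controller
          image: gcr.io/cloud-provider-vsphere/csi/release/driver:v2.7.0
          env:
            - name: X_CSI_MODE
              value: controller
            - name: VSPHERE_CSI_CONFIG
              value: /etc/cloud/csi-vsphere.conf
            - name: KUBECONFIG   # assumption: the driver falls back to this env var
              value: /etc/workload/kubeconfig
          volumeMounts:
            - name: vsphere-config-volume
              mountPath: /etc/cloud
              readOnly: true
            - name: workload-kubeconfig
              mountPath: /etc/workload
              readOnly: true
            - name: socket-dir
              mountPath: /csi
      volumes:
        - name: socket-dir
          emptyDir: {}
        - name: vsphere-config-volume
          secret:
            secretName: vsphere-config-secret
        - name: workload-kubeconfig
          secret:
            secretName: workload1-kubeconfig   # hypothetical Secret holding the workload kubeconfig
```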

k8s-ci-robot commented 2 years ago

@sathieu: The label(s) /label feature cannot be applied. These labels are supported: api-review, tide/merge-method-merge, tide/merge-method-rebase, tide/merge-method-squash, team/katacoda, refactor

In response to [this](https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/1742#issuecomment-1120845561):

> /label feature

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
divyenpatel commented 2 years ago

@sathieu Have you explored the Tanzu Kubernetes Guest Cluster? See https://cormachogan.com/2020/09/29/deploying-tanzu-kubernetes-guest-cluster-in-vsphere-with-tanzu/

CSI driver running in the Tanzu Guest Cluster does not make a connection to the vCenter server.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

omniproc commented 2 years ago

OAuth, if fully added in vSphere 8, could possibly at least remove the need to provide static credentials. It would also make it possible to couple access keys with a limited set of privileges.

> @sathieu Have you explored Tanzu Kubernetes Guest Cluster - https://cormachogan.com/2020/09/29/deploying-tanzu-kubernetes-guest-cluster-in-vsphere-with-tanzu/
>
> CSI driver running in the Tanzu Guest Cluster does not make a connection to the vCenter server.

How's that possible? Doesn't the CSI driver require access to the API to create volumes and attach/detach volumes to/from cluster nodes? I skimmed the article you linked, but it doesn't really explain that topic.

We "solved" the issue by heavily limiting what those credentials can do so the impact is limited to "a cluster admin can destroy his own cluster" (which is always the case, regardless of the CSI). But the required communication from _all_workernodes to the vSphere API is still a burden we'd like to see gone from a security perspective.

The (hopefully) upcoming full OAuth integration in vSphere 8 might eventually help with the credentials and privilege situation.

sathieu commented 2 years ago

@omniproc According to the Tanzu Kubernetes Grid Service Architecture documentation (the diagram is not great, but I don't know a better one), workload clusters access the API through the supervisor cluster. This is a bit better, but the traffic still flows transitively from the workload cluster to the vSphere API.

Also, this way of working is not open source, or at least not documented well enough to install on a vanilla cluster.

omniproc commented 2 years ago

@sathieu that's unfortunate and seems like a rather artificial limitation put on the OSS project.

I'm not sure if it's helpful, but the source hints that the `ImprovedVolumeTopology` flag does the following:

> is the feature flag used to make the following improvements to topology feature: avoid taking in VC credentials in node daemonset.

However, in my tests enabling this feature gate seems to do much more than just that, and the docs are not really clear on what exactly it does. I guess it's linked to the topology-aware setup, but normally that would be enabled using those flags, so it's somewhat confusing. I didn't have time to play around with it or read the source to better understand the behaviour (my guess is that the args in the manifest are leftovers from previous versions and that by now the gate is enabled via the ConfigMap sketched below).
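For what it's worth, the gate does appear in the feature-states ConfigMap that ships with the release manifests; a sketch (the namespace and the exact set of keys vary by driver version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: internal-feature-states.csi.vsphere.vmware.com
  namespace: vmware-system-csi
data:
  improved-volume-topology: "true"   # the gate discussed above
  # other feature switches from the shipped manifest left as-is
```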

The docs about the "topology aware" feature add even more confusion and state the exact opposite:

> By default, the vSphere Cloud Provider Interface and vSphere Container Storage Plug-in pods are scheduled on Kubernetes control plane nodes. For non-topology aware Kubernetes clusters, it is sufficient to provide the credentials of the control plane node to vCenter Server where this cluster is running. For topology-aware clusters, every Kubernetes node must discover its topology by communicating with vCenter Server. This is required to utilize the topology-aware provisioning and late binding feature.
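To make the quoted trade-off concrete: the per-node vCenter discovery is what feeds topology-aware provisioning and late binding, i.e. StorageClasses along these lines (the topology label key varies by driver version, and `zone-a` is an example):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-zonal
provisioner: csi.vsphere.vmware.com
volumeBindingMode: WaitForFirstConsumer   # late binding: provision only after Pod placement
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.csi.vmware.com/k8s-zone   # older releases use a different label key
        values: ["zone-a"]
```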

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/1742#issuecomment-1304443935) (quoting the triage bot's `/close not-planned` comment above).

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
sathieu commented 1 year ago

/reopen

Actually, this is already possible; I've implemented it in the Helm chart: https://github.com/vsphere-tmm/helm-charts/pull/50.

Still, I think some official docs are needed, so I'm reopening.
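A hypothetical values sketch of that deployment mode (the key names here are illustrative, not the chart's actual interface; see the PR for the real values):

```yaml
# values.yaml (illustrative keys only)
controller:
  external:
    enabled: true                               # run the controller in the management cluster
    kubeconfigSecretName: workload1-kubeconfig  # kubeconfig for the workload cluster
```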

k8s-ci-robot commented 1 year ago

@sathieu: Reopened this issue.

In response to [this](https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/1742#issuecomment-1304878783) (quoting @sathieu's `/reopen` comment above).

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
lipingxue commented 1 year ago

/assign @divyenpatel

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/1742#issuecomment-1356497180) (quoting the triage bot's `/close not-planned` comment above).

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.