kubernetes-retired / multi-tenancy

A working place for multi-tenancy related proposals and prototypes.
Apache License 2.0

Benchmark to block tenants from accessing other tenants' NFS PVs #1525

Closed mac-chaffee closed 2 years ago

mac-chaffee commented 2 years ago

If a cluster uses an NFS-based Container Storage Interface (CSI) driver for persistent volumes, then a savvy user could read from or write to another user's PersistentVolume if they knew the NFS server's IP and the mount path (guessable by checking the `df` output inside their own pod). A similar attack might be possible with certain iSCSI/block-based CSI drivers, but I'm not sure.

Users can "mount" NFS servers without any special permissions/capabilities by using a user-space NFS client, so the only way to prevent this is to block network access from pods to that NFS server. One way of doing that is a Calico GlobalNetworkPolicy (see https://github.com/NetApp/trident/issues/638#issuecomment-1032801713), but a vanilla NetworkPolicy might work too.
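As a rough sketch of the vanilla-NetworkPolicy approach, something like the following might work, assuming a CNI plugin that enforces NetworkPolicy; the namespace name and the NFS server IP `10.0.0.10` are placeholders:

```yaml
# Hypothetical example: deny egress from all pods in tenant namespace
# "tenant-b" to the NFS server (placeholder IP 10.0.0.10), while still
# allowing all other egress traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-nfs-server
  namespace: tenant-b
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.10/32   # the NFS server
```

One caveat with this per-namespace approach: the policy has to be created in every tenant namespace, which is why a cluster-wide Calico GlobalNetworkPolicy is attractive.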

The benchmark could perform a test like the following:

  1. Install a pod with a PVC in namespace A.
  2. Exec into the pod in namespace A and determine the storage medium (NFS, block, or something else). If NFS, save the server hostname and mount path in a variable.
  3. Install an unprivileged pod in namespace B.
  4. From the pod in namespace B, try to connect to the NFS server (e.g. with `nc`). If there's a good containerized user-space NFS client, we could also try reading the PVC, but a simple reachability test is probably fine for now.
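A minimal sketch of steps 2 and 4, assuming the benchmark reads the pod's `/proc/mounts` to find NFS mounts and then probes the NFS TCP port (2049 by default) from the other namespace; the helper names and the sample input are hypothetical:

```python
import socket

def find_nfs_mounts(proc_mounts_text):
    """Parse /proc/mounts-style text and return (server, export_path,
    mountpoint) tuples for NFS mounts (step 2 of the benchmark sketch)."""
    results = []
    for line in proc_mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        source, mountpoint, fstype = fields[0], fields[1], fields[2]
        # NFS mount sources look like "server:/export/path"
        if fstype.startswith("nfs") and ":" in source:
            server, export_path = source.split(":", 1)
            results.append((server, export_path, mountpoint))
    return results

def nfs_reachable(server, port=2049, timeout=3):
    """TCP reachability probe for step 4: the benchmark passes if this
    returns False from the unprivileged pod in namespace B."""
    try:
        with socket.create_connection((server, port), timeout=timeout):
            return True
    except OSError:
        return False

# Fabricated /proc/mounts excerpt for illustration
sample = """\
overlay / overlay rw,relatime 0 0
10.0.0.10:/exports/pvc-1234 /data nfs4 rw,relatime,vers=4.1 0 0
tmpfs /tmp tmpfs rw 0 0
"""

print(find_nfs_mounts(sample))
# -> [('10.0.0.10', '/exports/pvc-1234', '/data')]
```

A TCP connect is only a proxy for "can this tenant reach the NFS server"; a stronger version of the test would attempt an actual NFS read with a user-space client.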
k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago


/lifecycle rotten

mac-chaffee commented 2 years ago

/remove-lifecycle rotten

I believe this is still a very important protection for tenants; I just haven't had the time to implement it.

k8s-triage-robot commented 2 years ago


/lifecycle stale

k8s-triage-robot commented 2 years ago


/lifecycle rotten

k8s-triage-robot commented 2 years ago


/close not-planned

k8s-ci-robot commented 2 years ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/multi-tenancy/issues/1525#issuecomment-1304655860):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.