colearendt opened this issue 2 years ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/kind feature
/lifecycle frozen
@jacobwolfaws Any news on why this has been moved to 'frozen'?
@pat-s Moving it to frozen so people don't have to keep removing the lifecycle stale tag. No updates on this particular feature.
@colearendt I'm facing a similar problem: I want to use one FSx volume (1200 GiB) across multiple namespaces, which requires multiple PVs and PVCs. How would using two PVs with the same mountpath work in this case? (separating tenants by Group/User)
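One approach worth sketching for the multi-namespace question above (a sketch only, not verified against this driver; the filesystem ID, DNS name, and mountname below are placeholders) is to create one statically provisioned PV per namespace, each pointing at the same FSx for Lustre filesystem. Note that some CSI drivers misbehave when two PVs share the same volumeHandle, so this is worth testing before relying on it.

```yaml
# One PV per tenant namespace, all backed by the same FSx for Lustre
# filesystem. fs-0123456789abcdef0, the DNS name, and the mountname
# are placeholders -- substitute the values for your filesystem.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fsx-pv-tenant-a
spec:
  capacity:
    storage: 1200Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0
    volumeAttributes:
      dnsname: fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com
      mountname: abcdefgh
```

On the pod side, each tenant's container can then mount only its own sub-directory via a standard `subPath` in the volumeMount; ownership and permissions on those sub-directories would still have to be managed on the filesystem itself (e.g. by Group/User, as suggested above).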
Is your feature request related to a problem? / Why is this needed
FSx for Lustre filesystems are large (minimum 1.2 TiB). As a result, it would be nice to be able to couple related applications within a particular FSx for Lustre filesystem.

/feature

Describe the solution you'd like in detail
For instance:

Describe alternatives you've considered
- mountnames (mount sub-paths)

Additional context
This feature is prompted by the nfs-subdir-external-provisioner's behavior (although nfs-server-provisioner works much the same way too).
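For context, the nfs-subdir-external-provisioner behavior referenced above is driven by a StorageClass whose provisioner carves a sub-directory per PVC out of one shared export. Roughly (the provisioner name depends on the install; pathPattern and onDelete are that project's documented parameters):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: cluster.local/nfs-subdir-external-provisioner
parameters:
  # Each PVC gets its own sub-directory under the shared export.
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
  onDelete: retain
```

The feature requested here is essentially the same pattern applied to a single large FSx for Lustre filesystem.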