Closed: markturansky closed this issue 3 weeks ago.
I think this deserves a more generic discussion: how to feed volume plugin mounters more options. There are NFS mount-time options (including security type, uid, gid, soft/hard, NFS version, fscache, etc.) that we want to pass to the NFS mounter. Similarly, we want to pass options into the Glusterfs mounter.
The Kerberos discussion could have implications for minion configuration. Minions have to join a krb5 domain, which is not multi-tenancy friendly.
This could be a potential use case for https://github.com/kubernetes/kubernetes/issues/13138: we containerize the NFS mounter in a krb5-enabled container.
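As a rough sketch of what "feeding the mounter more options" could look like, the snippet below flattens a bag of mount-time options into the comma-separated string that mount(8) expects after -o. The struct and field names here are hypothetical illustrations, not an existing Kubernetes API.

```go
package main

import (
	"fmt"
	"strings"
)

// nfsOptions is a hypothetical set of mount-time options a volume
// definition could expose; the field names are illustrative only.
type nfsOptions struct {
	SecurityType string // e.g. "krb5p"
	Soft         bool   // soft vs. hard mount
	Version      string // e.g. "4.1"
	Extra        []string
}

// buildMountOptions flattens the struct into the comma-separated
// option string passed to the mount binary after -o.
func buildMountOptions(o nfsOptions) string {
	opts := []string{}
	if o.SecurityType != "" {
		opts = append(opts, "sec="+o.SecurityType)
	}
	if o.Soft {
		opts = append(opts, "soft")
	}
	if o.Version != "" {
		opts = append(opts, "vers="+o.Version)
	}
	opts = append(opts, o.Extra...)
	return strings.Join(opts, ",")
}

func main() {
	fmt.Println(buildMountOptions(nfsOptions{
		SecurityType: "krb5p",
		Soft:         true,
		Version:      "4.1",
		Extra:        []string{"fsc"},
	}))
	// → sec=krb5p,soft,vers=4.1,fsc
}
```

The open question in this thread is less the string-building and more where such options live in the API and how they reach the plugin on the node.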
Can we init and join multiple kerberos domains on the host in any way? For instance, joining as multiple UIDs in different sessions?
Clayton Coleman | Lead Engineer, OpenShift
Good point. Yes, it is possible to configure multiple Kerberos realms, but I am still not sure how to avoid UID collisions if the same UIDs exist in different Kerberos realms.
This multi-domain issue is also raised in this NFSv4 draft (Adamson & Williams, "Multi NFSv4 Domain"):
Multi-domain capable sites need to meet the following requirements in order to ensure that NFSv4 clients and servers can map between name@domain and internal representations reliably. While some of these constraints are basic assumptions in NFSv4.0 [RFC7530] and NFSv4.1 [RFC5661], they need to be clearly stated for the multi-domain case.
o The NFSv4 domain portion of name@domain MUST be unique within the multi-domain namespace. See [RFC5661] section 5.9, "Interpreting owner and owner_group", for a discussion on NFSv4 domain configuration.
o The name portion of name@domain MUST be unique within the specified NFSv4 domain.
Due to UID and GID collisions, stringified UID/GIDs MUST NOT be used in a multi-domain deployment. This means that multi-domain-capable servers MUST reject requests that use stringified UID/GIDs.
@markturansky What does it mean to 'deliver a keytab to the node' ?
Secret reference in the volume definition, just like Gluster and Ceph?
I prefer injecting a secret into a volume, because that can also work with persistent volumes. Kubelet can look up the secret and inject it securely into the volume.
That's what I did in this PR: https://github.com/kubernetes/kubernetes/pull/7150, but there it was crudely applied to all volumes. We can instead define an AuthenticatableVolumePlugin interface and implement it only in the plugins that support it. A new interface would follow the existing pattern in volumes, where narrow interfaces do the work and plugins implement them individually.
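A minimal sketch of what such a narrow opt-in interface might look like follows. The interface name comes from the comment above, but the method signatures, the Secret stand-in type, and the toy NFS plugin are all hypothetical, not the actual Kubernetes volume plugin API.

```go
package main

import "fmt"

// Secret is a stand-in for the Kubernetes secret object; real plugin
// code would use the API type instead.
type Secret struct {
	Data map[string][]byte
}

// AuthenticatableVolumePlugin is a sketch of the narrow interface
// suggested above: only plugins that can consume credentials
// implement it. Signatures are illustrative assumptions.
type AuthenticatableVolumePlugin interface {
	// SupportsAuthentication reports whether this plugin can use a secret.
	SupportsAuthentication() bool
	// SetCredentials hands the resolved secret to the plugin before mount.
	SetCredentials(secret *Secret) error
}

// nfsPlugin is a toy implementation for illustration only.
type nfsPlugin struct {
	keytab []byte
}

func (p *nfsPlugin) SupportsAuthentication() bool { return true }

func (p *nfsPlugin) SetCredentials(secret *Secret) error {
	kt, ok := secret.Data["keytab"]
	if !ok {
		return fmt.Errorf("secret is missing a keytab key")
	}
	p.keytab = kt
	return nil
}

func main() {
	var plugin AuthenticatableVolumePlugin = &nfsPlugin{}
	err := plugin.SetCredentials(&Secret{Data: map[string][]byte{"keytab": []byte("...")}})
	fmt.Println(plugin.SupportsAuthentication(), err)
	// → true <nil>
}
```

The kubelet would type-assert each plugin against this interface and inject the looked-up secret only where supported, rather than applying credentials to all volumes.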
Distributing and synchronizing keytabs in a large-scale environment is scary. I wouldn't be surprised if people prefer some sort of key management such as KDC/AD.
Hey, I have to set the credentials for the NFS share. I'm hosting Kubernetes on Azure and I created a file share that can be mounted as NFS, but to do this I have to set some additional parameters like username and password.
There are two models at play:
- Be able to use a specific NFS mount given all of the credentials necessary to authenticate.
- Have a fully implemented KDC/AD infrastructure.
This issue is about solving #1 - it should be possible to NFS mount without requiring additional host infrastructure (like kerberos) if you have the appropriate credentials to do so. This is a delegation of connection authority from the caller to the node that mounts the volume. The node in this case has no implicit authority to kinit as anyone.
A KDC/AD infrastructure will likely associate identity (carried by the pod) to a kinit on the node that runs the pod. However, the node will have to have authority to kinit as whatever user carries identity anyway, so that's just a different path from above.
Both are valid, different use cases. Daniel's use case is the former and something I consider relevant for cloud environments.
Clayton Coleman | Lead Engineer, OpenShift
We'd need to support two types of secrets: user/pass and keytab. The simplest possible is user/pass at this point.
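A small sketch of distinguishing the two secret shapes described above; the key names ("username", "password", "keytab") are assumptions for illustration, not an established convention.

```go
package main

import (
	"errors"
	"fmt"
)

// credentialType classifies a secret's data as either a keytab secret
// or a user/pass secret. Key names are hypothetical.
func credentialType(data map[string][]byte) (string, error) {
	_, hasUser := data["username"]
	_, hasPass := data["password"]
	_, hasKeytab := data["keytab"]
	switch {
	case hasKeytab:
		return "keytab", nil
	case hasUser && hasPass:
		return "userpass", nil
	default:
		return "", errors.New("secret contains neither a keytab nor username/password")
	}
}

func main() {
	t, _ := credentialType(map[string][]byte{
		"username": []byte("u"),
		"password": []byte("p"),
	})
	fmt.Println(t)
	// → userpass
}
```

A mounter could then branch on the returned type: user/pass feeds ordinary mount parameters, while a keytab would have to be staged for the Kerberos machinery on the node.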
Clayton Coleman | Lead Engineer, OpenShift
Any update here? I'm also looking for a way to pass uid/gid to the nfs mounter.
@florix Running the mount as a specific uid/gid should probably be a separate issue; there's a fair amount of extra baggage with those, like policy on which uid/gids a user may specify.
I'm confused a little bit by this discussion. Is it possible for Kubernetes to mount NFS volumes with krb5p?
Best, Evan
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Is there any kind of workaround for this anyone has found? In a multi-tenant cluster separated by namespace, this is an issue for mounting external NFS into pods on the cluster. Having all namespaces be able to mount each others external mounts is a security issue.
Thanks, John
/remove-lifecycle stale
/remove-lifecycle stale
Bump this issue.
Any chance to get it done?
/remove-lifecycle rotten
/remove-lifecycle stale
Any update on this?
@doprdele any luck? I want to access a kerberized nfs from kubernetes pod with nfs volume plugin
/remove-lifecycle stale
Bump. Also interested in this.
+1 need this support - access a kerberized NFS volume from a pod
gssproxy needs the user's keytab (user.keytab) to obtain credentials, so do we need to make sure user namespaces are enabled for this feature? Different UIDs' keytabs may differ under different KDC/AD domains.
/remove-lifecycle stale
+1 need this support - access a kerberized NFS volume from a pod
+1 from me as well. It would be nice to store credentials plus KDC information as a secret, and have the CSI driver take care of obtaining a ticket, using it to mount the NFS share, and handling ticket renewal.
Maybe I'm thinking about this wrong and it does not make sense, but on normal Linux servers Kerberos is common practice for securing NFS.
+1, in enterprise environments this is necessary. Is there any other solution?
+1, need this support too. I just can't believe Kubernetes doesn't support NFSv4 with krb in any (even obscure) way.
/remove-lifecycle stale
+1 I would also very much like to use nfs4+kerberos with k8s.
/remove-lifecycle stale
Any feedback? More users have been asking to use NFS with Kerberos recently.
/remove-lifecycle stale
Is this considered for implementation? I'd be happy to have this!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@smarterclayton from internal email discussion:
The original request received was to authenticate with Kerberos.
@kubernetes/rh-storage