kubernetes / kubernetes

Production-Grade Container Scheduling and Management
https://kubernetes.io
Apache License 2.0

Allow NFS to accept a Secret for authentication #13136

Closed: markturansky closed this issue 3 weeks ago

markturansky commented 9 years ago

@smarterclayton from internal email discussion:

NFS Volumes should accept a secret that results in delivery of a keytab to the node. Admins should be able to make a keytab available for folks to add to pods directly.

The original request received was to authenticate with Kerberos.

@kubernetes/rh-storage
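
For concreteness, a minimal sketch of the kind of Secret this request envisions, using a stand-in type; the `keytab` data key is a hypothetical convention, not an established contract:

```go
// Illustrative only: a stand-in for the api.Secret that would carry the
// keytab to the node. The "keytab" key name is a hypothetical convention.
package main

import "fmt"

type secret struct {
	Name string
	Data map[string][]byte
}

func main() {
	// A real keytab would be the full binary file; 0x05 0x02 is just the
	// keytab-format version magic, shown here as a truncated sample.
	s := secret{
		Name: "nfs-krb5-keytab",
		Data: map[string][]byte{"keytab": {0x05, 0x02}},
	}
	fmt.Printf("secret %q carries %d bytes of keytab data\n", s.Name, len(s.Data["keytab"]))
}
```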

rootfs commented 9 years ago

I think this deserves a more generic discussion: how to feed volume plugin mounters more options. There are NFS mount-time options (including security type, uid, gid, soft/hard, NFS version, fscache, etc.) that we want to pass to the NFS mounter (see the sketch below). Similarly, we want to pass options into the Glusterfs mounter.

The Kerberos discussion could have implications for minion configuration. Minions have to join the krb5 domain, which is not multi-tenancy friendly.

This could be a potential use case for https://github.com/kubernetes/kubernetes/issues/13138: we containerize the NFS mounter in a krb5-enabled container.
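
As a rough illustration of the options-passing problem, here is how such mount-time options might be threaded through to mount(8); the helper below is a hypothetical sketch, not kubelet code (the real kubelet mounts via its mount.Interface):

```go
// Hypothetical sketch of passing arbitrary NFS mount-time options through
// to mount(8). This only illustrates the plumbing being discussed.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// buildNFSMountCmd joins user-supplied options (sec=krb5p, hard/soft,
// nfsvers, fsc for fscache, ...) into a single -o argument.
func buildNFSMountCmd(server, export, target string, opts []string) *exec.Cmd {
	args := []string{"-t", "nfs"}
	if len(opts) > 0 {
		args = append(args, "-o", strings.Join(opts, ","))
	}
	args = append(args, fmt.Sprintf("%s:%s", server, export), target)
	return exec.Command("mount", args...)
}

func main() {
	cmd := buildNFSMountCmd("nfs.example.com", "/export/data", "/mnt/data",
		[]string{"sec=krb5p", "hard", "nfsvers=4.1"})
	// Prints: mount -t nfs -o sec=krb5p,hard,nfsvers=4.1 nfs.example.com:/export/data /mnt/data
	fmt.Println(strings.Join(cmd.Args, " "))
}
```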

smarterclayton commented 9 years ago

Can we init and join multiple Kerberos domains on the host in any way? For instance, joining as multiple UIDs in different sessions?

rootfs commented 9 years ago

Good point. Yes, it is possible to configure multiple Kerberos realms, but I am still not sure how to avoid UID collisions if the same UIDs exist in different Kerberos realms.

rootfs commented 9 years ago

This multi-domain issue also seems to be raised in this NFSv4 draft:

Multi-domain capable sites need to meet the following requirements in
   order to ensure that NFSv4 clients and servers can map between
   name@domain and internal representations reliably.  While some of
   these constraints are basic assumptions in NFSv4.0 [RFC7530] and
   NFSv4.1 [RFC5661], they need to be clearly stated for the multi-
   domain case.

   o  The NFSv4 domain portion of name@domain MUST be unique within the
      multi-domain namespace.  See [RFC5661] section 5.9 "Interpreting
      owner and owner_group" for a discussion on NFSv4 domain
      configuration.

   o  The name portion of name@domain MUST be unique within the
      specified NFSv4 domain.

   Due to UID and GID collisions, stringified UID/GIDs MUST NOT be used
   in a multi-domain deployment.  This means that multi-domain-capable
   servers MUST reject requests that use stringified UID/GIDs.

pmorie commented 9 years ago

@markturansky What does it mean to 'deliver a keytab to the node' ?

smarterclayton commented 9 years ago

Secret reference in the volume definition, just like Gluster and Ceph?
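
For comparison, the Gluster/Ceph style of secret reference would suggest an API shape like the following; the SecretRef field here is hypothetical and does not exist on the real NFSVolumeSource (it mirrors how CephFSVolumeSource references a Secret for mount credentials):

```go
// Hypothetical API sketch only: the real v1 NFSVolumeSource has no
// SecretRef field. Shown purely to illustrate the proposal.
package v1

type LocalObjectReference struct {
	Name string `json:"name"`
}

type NFSVolumeSource struct {
	Server   string `json:"server"`
	Path     string `json:"path"`
	ReadOnly bool   `json:"readOnly,omitempty"`

	// SecretRef would name a Secret in the pod's namespace holding either
	// username/password or a keytab. Hypothetical field.
	SecretRef *LocalObjectReference `json:"secretRef,omitempty"`
}
```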

markturansky commented 9 years ago

I prefer injecting a secret into a volume, because that can also work with persistent volumes. Kubelet can look up the secret and inject it securely into the volume.

That's what I did in this PR: https://github.com/kubernetes/kubernetes/pull/7150, but that was crudely applied to all volumes. We could instead add an AuthenticatableVolumePlugin interface and implement it only in the plugins that support it. A new interface would follow the existing pattern in volumes, where narrow interfaces do the work and plugins can implement them individually (see the sketch below).
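
A minimal sketch of that narrow-interface pattern, with stand-in types; the interface name comes from the comment above, but the method and discovery mechanics are illustrative, not the real pkg/volume API:

```go
// Sketch of the narrow optional-interface pattern described above.
package main

import "fmt"

type Secret struct {
	Data map[string][]byte
}

// VolumePlugin stands in for pkg/volume.VolumePlugin.
type VolumePlugin interface {
	GetPluginName() string
}

// AuthenticatableVolumePlugin is the proposed narrow interface: only
// plugins that can consume mount credentials implement it.
type AuthenticatableVolumePlugin interface {
	VolumePlugin
	SetUpWithSecret(secret *Secret) error
}

type nfsPlugin struct{}

func (p *nfsPlugin) GetPluginName() string { return "kubernetes.io/nfs" }

func (p *nfsPlugin) SetUpWithSecret(secret *Secret) error {
	if _, ok := secret.Data["keytab"]; !ok {
		return fmt.Errorf("secret is missing a keytab")
	}
	// ...hand the keytab to the mount helper here...
	return nil
}

func main() {
	var plugin VolumePlugin = &nfsPlugin{}
	// The kubelet would probe for the optional interface with a type
	// assertion, as it does for other narrow volume interfaces:
	if auth, ok := plugin.(AuthenticatableVolumePlugin); ok {
		fmt.Println(auth.SetUpWithSecret(&Secret{Data: map[string][]byte{"keytab": {5, 2}}}))
	}
}
```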

rootfs commented 9 years ago

Distributing and synchronizing keytabs in a large-scale environment is scary. I wouldn't be surprised if people prefer some sort of key management like KDC/AD.

danielwinter83 commented 9 years ago

Hey, I need to set credentials for an NFS share. I'm hosting Kubernetes on Azure and created a file share (container storage) that can be mounted as NFS, but to do so I have to set some additional parameters like a username and password.

smarterclayton commented 9 years ago

There are two models at play:

  1. Be able to use a specific NFS mount given all of the credentials necessary to authenticate.
  2. Have a fully implemented KDC/AD infrastructure.

This issue is about solving model 1: it should be possible to mount NFS without requiring additional host infrastructure (like Kerberos) if you have the appropriate credentials. This is a delegation of connection authority from the caller to the node that mounts the volume. The node in this case has no implicit authority to kinit as anyone.

A KDC/AD infrastructure will likely associate identity (carried by the pod) with a kinit on the node that runs the pod. However, the node will have to have authority to kinit as whatever user carries identity anyway, so that's just a different path from above.

Both are valid but different use cases. Daniel's use case is the former, and something I consider relevant for cloud environments.
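
To make the second model concrete, the node-side identity association would boil down to something like the following; the principal, paths, and wiring are all hypothetical:

```go
// Illustrative only: a node-side helper acquiring a Kerberos TGT for a
// pod's identity from a delivered keytab, using standard kinit(1) flags
// (-k/-t: authenticate from a keytab, -c: credential cache to write).
package main

import (
	"log"
	"os/exec"
)

func kinitWithKeytab(principal, keytabPath, cachePath string) error {
	return exec.Command("kinit", "-k", "-t", keytabPath, "-c", cachePath, principal).Run()
}

func main() {
	// Hypothetical per-pod paths; a krb5-secured NFS mount would then
	// authenticate against this credential cache.
	err := kinitWithKeytab("app-user@EXAMPLE.COM",
		"/var/lib/kubelet/pod-secrets/nfs/keytab",
		"/tmp/krb5cc_pod")
	if err != nil {
		log.Fatalf("kinit failed: %v", err)
	}
}
```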

smarterclayton commented 9 years ago

We'd need to support two types of secrets: user/pass and keytab. The simplest possible option is user/pass at this point.
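
A sketch of how a mounter might tell the two secret shapes apart; the key names ("username"/"password"/"keytab") are illustrative conventions, not a defined contract:

```go
// Hedged sketch: distinguishing the two proposed secret types by their
// data keys, preferring the simpler user/pass form.
package main

import "fmt"

type authMode string

const (
	authUserPass authMode = "user/pass"
	authKeytab   authMode = "keytab"
)

func classifySecret(data map[string][]byte) (authMode, error) {
	_, hasUser := data["username"]
	_, hasPass := data["password"]
	if hasUser && hasPass {
		return authUserPass, nil // the simpler form, checked first
	}
	if _, ok := data["keytab"]; ok {
		return authKeytab, nil
	}
	return "", fmt.Errorf("secret holds neither username/password nor a keytab")
}

func main() {
	mode, err := classifySecret(map[string][]byte{"keytab": {5, 2}})
	fmt.Println(mode, err) // keytab <nil>
}
```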

fiorix commented 8 years ago

Any update here? I'm also looking for a way to pass uid/gid to the nfs mounter.

pmorie commented 8 years ago

@fiorix Running the mount as a specific uid/gid should probably be a separate issue; there's a fair amount of extra baggage with those, like policy on what uid/gids a user can specify.

doprdele commented 6 years ago

I'm a little confused by this discussion. Is it possible for Kubernetes to mount NFS volumes with krb5p?

Best, Evan

fejta-bot commented 6 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

jd-daniels commented 6 years ago

Has anyone found any kind of workaround for this? In a multi-tenant cluster separated by namespace, this is an issue for mounting external NFS into pods on the cluster. Having all namespaces able to mount each other's external mounts is a security issue.

Thanks, John

george-angel commented 6 years ago

/remove-lifecycle stale

fejta-bot commented 6 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

george-angel commented 6 years ago

/remove-lifecycle stale

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

soukron commented 5 years ago

Bump this issue.

Any chance of getting this done?

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

george-angel commented 5 years ago

/remove-lifecycle rotten

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

AndrewSav commented 5 years ago

/remove-lifecycle stale

iahmad-khan commented 5 years ago

Any update on this?

iahmad-khan commented 5 years ago

@doprdele any luck? I want to access a kerberized NFS share from a Kubernetes pod with the NFS volume plugin.

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

george-angel commented 5 years ago

/remove-lifecycle stale

brightshine1111 commented 5 years ago

Bump. Also interested in this.

eshamay commented 4 years ago

+1 need this support - access a kerberized NFS volume from a pod

iahmad-khan commented 4 years ago

+1 need this support - access a kerberized NFS volume from a pod

wenlxie commented 4 years ago

gssproxy needs the credential user.keytab, so do we need to make sure user namespaces are enabled for this feature? Different user IDs may have different keytabs under different KDC/AD domains.

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

george-angel commented 4 years ago

/remove-lifecycle stale

dblock247 commented 4 years ago

+1 need this support - access a kerberized NFS volume from a pod

chrifey commented 4 years ago

+1 from me as well. It would be nice to store "credentials" plus KDC information as a Secret and have CSI take care of obtaining a ticket, using it to mount the NFS share (and also handling ticket renewal); a sketch follows below.

Maybe I'm thinking about this wrong and it doesn't make sense, but on normal Linux servers Kerberos is common practice for securing NFS.
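
A rough sketch of the renewal behavior described here, as a node-side loop a CSI driver could run; the interval, principal, and paths are all illustrative:

```go
// Hedged sketch of ticket acquisition plus renewal for a CSI node plugin.
package main

import (
	"log"
	"os/exec"
	"time"
)

func renewLoop(principal, keytab, cache string, every time.Duration) {
	for {
		// Re-run kinit from the stored keytab before the ticket expires.
		if err := exec.Command("kinit", "-k", "-t", keytab, "-c", cache, principal).Run(); err != nil {
			log.Printf("kinit failed, will retry next cycle: %v", err)
		}
		time.Sleep(every)
	}
}

func main() {
	// Renew well inside a typical 10h ticket lifetime; real code would tie
	// this to the volume's mount/unmount lifecycle instead of looping forever.
	renewLoop("app@EXAMPLE.COM", "/etc/csi-nfs/app.keytab",
		"/var/lib/csi-nfs/krb5cc_app", 8*time.Hour)
}
```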

capacman commented 4 years ago

+1, in enterprise environments this is necessary. Is there any other solution?

viktoriaas commented 4 years ago

+1, need this support too. I just can't believe Kubernetes doesn't support NFSv4 with Kerberos in any (obscure) way.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

george-angel commented 3 years ago

/remove-lifecycle stale

PorkCharsui79 commented 3 years ago

+1 I would also very much like to use nfs4+kerberos with k8s.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

george-angel commented 3 years ago

/remove-lifecycle stale

soukron commented 3 years ago

Any feedback? More users have been asking to use NFS with Kerberos in recent days.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

george-angel commented 3 years ago

/remove-lifecycle stale

justsomebody42 commented 3 years ago

Is this being considered for implementation? I'd be happy to have this!

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  - After 90d of inactivity, lifecycle/stale is applied
  - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  - Mark this issue or PR as fresh with /remove-lifecycle stale
  - Mark this issue or PR as rotten with /remove-lifecycle rotten
  - Close this issue or PR with /close
  - Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

nick-oconnor commented 2 years ago

/remove-lifecycle stale