Open everpeace opened 1 year ago
Thank you so much for illustrating the problem and context so clearly, and proposing potential solutions. We will take your proposals into consideration for the upcoming releases.
Just to clarify: Is the current design (only supporting Workload Identity) blocking your development on your in-house Kubernetes clusters? Or is the proposal just for avoiding the toil?
> Thank you so much for illustrating the problem and context so clearly, and proposing potential solutions. We will take your proposals into consideration for the upcoming releases.

Thank you very much!

> Just to clarify: Is the current design (only supporting Workload Identity) blocking your development on your in-house Kubernetes clusters? Or is the proposal just for avoiding the toil?
Actually, it is not a blocker currently because we don't have that many clusters. But it could become a problem in the near future.
We're using Fleet Workload Identity. I now understand that supporting Workload Identity Federation has priority.
Thanks a lot to @everpeace for proposing potential solutions. I think the first option is very similar to my scenario.
Is there any progress on this issue? @songjiaxun
p.s. ofek/csi-gcs looks like a good choice.
Hi @xieydd , we have made the decision to not support service account keys. Workload Identity is the recommended way to access Google Cloud services from within GKE. Workload Identity allows you to configure a Kubernetes service account to act as a Google service account, and avoid managing and protecting secrets manually. Please try to migrate to Workload Identity. Thank you!
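For anyone migrating, the GKE Workload Identity setup is roughly the following two steps (a sketch only; `PROJECT_ID`, `NAMESPACE`, `KSA_NAME`, and `GSA_NAME` are placeholders):

```shell
# Allow the Kubernetes service account to impersonate the Google service account.
gcloud iam service-accounts add-iam-policy-binding \
  GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# Annotate the Kubernetes service account so pods using it act as the Google service account.
kubectl annotate serviceaccount KSA_NAME \
  --namespace NAMESPACE \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
```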
Thanks for the update.
> we have made the decision to not support service account keys. Workload Identity is the recommended way to access Google Cloud services from within GKE.
I think this is a reasonable decision in terms of security (a long-lived key is dangerous: it is seldom rotated and hard to rotate safely, etc.). I can support this.
Option 2. Supporting Workload Identity Federation
Are there any plans for federated identity support other than Workload Identity (e.g. SPIFFE)? Workload Identity and Workload Identity Federation depend on very similar mechanisms, so I suppose there would be no additional security risk in supporting this.
> Hi @xieydd , we have made the decision to not support service account keys. Workload Identity is the recommended way to access Google Cloud services from within GKE. Workload Identity allows you to configure a Kubernetes service account to act as a Google service account, and avoid managing and protecting secrets manually. Please try to migrate to Workload Identity. Thank you!
Thanks for your reply, I will look into Workload Identity.
Hi @everpeace , I will spend some time doing my research on the federated identity, and will keep you updated.
Hello All,
It seems that the Workload Identity Federation is not supported by this CSI driver yet.
This is very unfortunate: it means the GCS CSI driver cannot run outside of Google Cloud, since it relies on the metadata service present on the nodes.
That in turn makes the GCS CSI driver unavailable in GKE on VMware, GKE on Bare Metal, and other GKE Enterprise flavours, which customers would expect to work, since these are Google Cloud products.
Sample Workload Identity Federation support is implemented and working well in the Google Cloud Secret Manager CSI Driver.
Is my understanding correct and there is no way of mounting GCS buckets into Kubernetes clusters running outside of Google Cloud (using this driver)?
For now, unfortunately, we still don't have enough bandwidth to work on supporting other auth methods. However, I've created a POC branch that supports GCP SA keys: https://github.com/GoogleCloudPlatform/gcs-fuse-csi-driver/commit/0d32b40a53a93792f6b2730227874cd5ae938510
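If anyone wants to try that branch, the key would presumably be supplied as a Kubernetes secret along these lines (the secret and file names here are my assumption, not necessarily the POC's actual interface):

```shell
# Create a secret holding the GCP service account key (names are illustrative).
kubectl create secret generic gcs-sa-key \
  --namespace my-namespace \
  --from-file=key.json=./service_account_key.json
```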
Hi, Thank you very much for the great project! I'm really surprised that FUSE can run in the sidecar container without any privileges!
From a Kubernetes platform admin's point of view, supporting FUSE was difficult (risky) because we had to grant privileges to FUSE containers in applications. But this project proved it can break that limitation (thanks to the "file descriptor passing" between the CSI driver and the FUSE sidecar, which encapsulates privileged operations in the CSI driver).
Context/Scenario
The Problem
Currently, the `gcs-fuse-csi-driver` implementation depends on Workload Identity. However, if I understood correctly, when the application runs in multiple Kubernetes clusters, the application developer has to create an iam-policy-binding for each k8s cluster (k8s service account), because applications running on different clusters have different Workload Identities. That also means the application developer will need to update the iam-policy-binding whenever one of our clusters is added/removed.
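Concretely, the toil is one binding per cluster: with N clusters, the developer maintains N members on the same Google service account. A sketch, where the pool and names are placeholders:

```shell
# GKE cluster: member comes from the project's Workload Identity pool.
gcloud iam service-accounts add-iam-policy-binding app-gsa@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[my-ns/my-ksa]"

# Each in-house cluster brings its own identity pool, so another binding is needed:
gcloud iam service-accounts add-iam-policy-binding app-gsa@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/cluster-b-pool/subject/system:serviceaccount:my-ns:my-ksa"
```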
As a platform admin, I find this UX not so convenient. I would like to reduce this toil on the application developer side.
Proposals
Option 1. Supporting GCP Service Account's Private Key in Kubernetes Secret
This would be handy. Of course, I understand Workload Identity is more secure than a long-lived (never-expiring) secret key file.
Our platform can provide a feature which syncs the secret across our clusters. In that case, application developers need to do nothing when the cluster the application runs on is added/removed. All they need to do is specify the secret name in their manifest.
By the way, `gcsfuse` also accepts `key-file` as a CLI argument, but `gcs-fuse-csi-driver` explicitly prohibits using that argument. Is there any reason for this?

In this option, I imagined the changes below:

- introduce a new parameter (e.g. `secretName`) in volumeAttributes (also in `MountConfig`),
- the CSI driver writes the key from the secret into the sidecar container (`/gcsfuse-tmp/.volumes/<volume-name>/service_account.json`?),
- the key file path is propagated via `MountConfig` (we need to add a field for this),
- the sidecar container invokes `gcsfuse` with `key-file=...`.
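From the user's side, Option 1 might look roughly like this (a sketch only: the `secretName` attribute is the proposed interface and does not exist today; bucket and names are placeholders):

```shell
# Hypothetical usage of the proposed secretName attribute (not implemented).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gcs-fuse-pv
spec:
  accessModes: [ReadWriteMany]
  capacity:
    storage: 5Gi
  csi:
    driver: gcsfuse.csi.storage.gke.io
    volumeHandle: my-bucket
    volumeAttributes:
      secretName: gcs-sa-key   # proposed attribute from Option 1
EOF
```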
Option 2. Supporting Workload Identity Federation
This would be more secure and might be the standard approach. Recently, there are application identity mechanisms that are not tied to a single Kubernetes cluster's authority (e.g. SPIFFE). By using these, an application can have a stable identity even when it runs on multiple Kubernetes clusters.
I think this fits the Workload Identity Federation use case perfectly.
In this option, I imagined the changes below:

- introduce new parameters in volumeAttributes (e.g. `workloadIdentityProvider`, `serviceAccountEmail`),
- introduce new annotations (e.g. `gke-gcsfuse/credential-source-volume`, `gke-gcsfuse/credential-source-file`),
- the CSI driver adds a `volumeMount` to the sidecar container for `gke-gcsfuse/credential-source-volume`,
- the CSI driver generates a credential configuration, records it in `MountConfig`, and passes it to the sidecar (`/gcsfuse-tmp/.volumes/<volume-name>/credential_configuration.json` can be used?),
- the sidecar container invokes `gcsfuse` with `key-file=...`.
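For reference, the credential configuration the driver would hand to `gcsfuse` in this option can already be generated today with `gcloud` (a sketch: the pool/provider IDs, service account, and token path are placeholders):

```shell
# Generate a Workload Identity Federation credential configuration that points
# at a projected service account token file inside the pod.
gcloud iam workload-identity-pools create-cred-config \
  projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/my-pool/providers/my-provider \
  --service-account=app-gsa@PROJECT_ID.iam.gserviceaccount.com \
  --credential-source-file=/var/run/secrets/tokens/gcs-token \
  --credential-source-type=text \
  --output-file=credential_configuration.json
```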
I would appreciate any feedback. Thanks in advance.