Closed: christopherhein closed this issue 7 months ago.
I agree. From a modeling perspective, the syncer can be viewed as the "Kubelet" for all tenants, since it honors the Pod lifecycle events for every tenant and provisions the actual workloads. It can "fork" the kubelet behavior to support ServiceAccount token auto-refresh.
During Pod creation, the syncer can request a valid token, record the projection spec in a special annotation, and mutate the projected volume into a Secret mount. The syncer can also implement a patroller that scans all super-cluster pods with ServiceAccount projection enabled (by checking the annotation) and refreshes their Secrets on behalf of the pods. It is a nontrivial change but should work.
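For illustration only, a super-cluster Pod mutated this way might look roughly like the sketch below. The annotation key, Secret name, and projection values are made-up placeholders, not an existing VirtualCluster convention.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: workload
  annotations:
    # Hypothetical annotation carrying the original projected serviceAccountToken
    # spec, so a patroller can find these pods and refresh the token later.
    tenancy.example.com/sa-token-projection: '{"audience":"vault","expirationSeconds":3600,"path":"token"}'
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
  volumes:
  # The projected serviceAccountToken volume from the tenant spec is rewritten
  # into a plain Secret mount; the syncer keeps the Secret's token fresh.
  - name: sa-token
    secret:
      secretName: workload-sa-token   # placeholder Secret name
```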
Hi @christopherhein, do you have any plans to support this feature? Just want to make sure this issue is still on track. :smile:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@christopherhein: Reopened this issue.
Hi @christopherhein, do you have any plans to support this feature? Just want to make sure this issue is still on track. 😄
Hey @wondywang, I have not been able to prioritize this feature on our end. Is this something you are working on? It would be a great addition.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@christopherhein: Reopened this issue.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
/remove-lifecycle rotten
@christopherhein: Reopened this issue.
Hi @christopherhein. Sorry for taking so long to reply. I have developed a feature against Kubernetes 1.22 that is compatible with the kube-root-ca feature gate. Kubernetes 1.22 has not yet fully removed the auto-generated ServiceAccount token Secrets; they are not removed until 1.24.
At the same time, I have developed a version compatible with Kubernetes 1.24, where the auto-generated ServiceAccount token Secret no longer exists. This feature is still being verified (our internal environment is currently short on 1.24 clusters).
cc @Fei-Guo
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
User Story
As a user, I would like to use ServiceAccount token projection from a VirtualCluster so that I can expose my cluster as an identity provider and issue tokens to tools like Vault.
Detailed Description
Upstream Kubernetes added support for ServiceAccount token volume projection, but it requires the kubelet to make requests on behalf of a workload to grant it tokens with specific audiences and expirations. This is difficult to do with shared kubelets in VirtualCluster. It would be nice if we could figure out a way to support this.
Docs: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection
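For reference, the upstream feature described in the docs above lets a Pod request a scoped, auto-rotated token through a projected volume along these lines (audience, paths, and expiration here are just example values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: vault-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: vault-token
    projected:
      sources:
      - serviceAccountToken:
          path: vault-token          # file name inside the mount
          expirationSeconds: 7200    # kubelet refreshes the token before expiry
          audience: vault            # token is scoped to this audience
```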
Anything else you would like to add:
You can create these with a TokenRequest definition by using curl against the token subresource for ServiceAccounts, like so:
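A sketch; the API server address, namespace, ServiceAccount name, audience, and credentials below are placeholders:

```sh
# Request a token for ServiceAccount "default" in namespace "demo"
# via the TokenRequest (token) subresource. All names are placeholders.
curl -sS -X POST \
  --cacert /path/to/ca.crt \
  -H "Authorization: Bearer ${ADMIN_TOKEN}" \
  -H "Content-Type: application/json" \
  "https://<apiserver>/api/v1/namespaces/demo/serviceaccounts/default/token" \
  -d '{"apiVersion":"authentication.k8s.io/v1","kind":"TokenRequest","spec":{"audiences":["vault"],"expirationSeconds":3600}}'
```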
The body of this request accepts this object: https://github.com/kubernetes/api/blob/v0.20.1/authentication/v1/types.go#L131
For example:
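The audience and expiration below are arbitrary example values:

```json
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenRequest",
  "spec": {
    "audiences": ["vault"],
    "expirationSeconds": 3600
  }
}
```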
Ideas
My initial thought is to add a step in the pod syncer, which currently checks that ServiceAccount tokens have been created before creating the pod. At that point it would call out to a "projected serviceaccount syncer" to make this TokenRequest against the tenant control plane, store the result in a Secret that exists only in the super cluster, and then mutate the pod spec from a projected source to a secret source so it mounts like a normal Secret. We would then need something that checks the validity of these tokens (the "projected serviceaccount syncer", for example); when a token gets close to expiring, it would redo the call and update the token, causing the secret mount to update as well.
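As a rough illustration of that flow, the super-cluster-only Secret might look something like this. The name and annotation are invented for the sketch; the "projected serviceaccount syncer" would rewrite the Pod's projected volume to mount it and refresh the data before the token expires:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: workload-sa-token            # placeholder; exists only in the super cluster
  annotations:
    # Hypothetical bookkeeping so the refresher knows when to re-issue the token.
    tenancy.example.com/token-expiration: "2023-01-01T00:00:00Z"
type: Opaque
stringData:
  token: "<token returned by the TokenRequest call against the tenant control plane>"
```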
Alternatively, we could modify the Kubelet to be "tenant aware", but I imagine this would become a massive effort.
/kind feature