Closed — pweil- closed this issue 2 years ago
Concretely, the syncer should detect an object that has a PodSpec (somehow, possibly Knative-style duck types, possibly just a hardcoded list of known types), detect whether it specifies a serviceAccountName
that references a SA with any ClusterRoleBindings, and inject the env var that tells the k8s client to talk to kcp's address.
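To make the injection concrete, here is an illustrative PodSpec fragment as it might look after the syncer's mutation. The host and port values are placeholders, not what kcp actually injects verbatim:

```yaml
# Illustrative only: a Deployment's pod template after the syncer has
# injected env vars pointing the in-cluster client back up to kcp.
# "kcp.example.com" and "6443" are placeholder values.
spec:
  template:
    spec:
      containers:
        - name: controller
          env:
            - name: KUBERNETES_SERVICE_HOST
              value: "kcp.example.com"   # placeholder kcp address
            - name: KUBERNETES_SERVICE_PORT
              value: "6443"              # placeholder port
```

client-go's in-cluster config reads exactly these two variables, which is why overriding them is enough to redirect an unmodified application.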
Given this represents a pretty esoteric use of the multicluster system, I think we could consider punting this to prototype3. Prototype2 would then only be focused on workspaces, API evolution, and non-operator multi-cluster workloads.
the duck typing approach is fragile. Don't we know exactly which workload types the syncer supports? Why don't we add support for just those, i.e. having a list of json path into those type where the service account could be declared?
> Don't we know exactly which workload types the syncer supports?
For now, we do (Deployment, DaemonSet, StatefulSet, that's about it I think?) -- but in the future we might have any arbitrary CRDs flowing down to pclusters to get translated into Pods etc down there, and will need the SA to be handled in our special way.
I'm fine hard-coding for now, or making it a flag, but we'll want to think about how to generalize it eventually.
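As a sketch of the hard-coding option: a small table mapping each supported kind to the JSON path of its service account field, plus a generic walker over decoded-JSON objects. The function name and path table here are illustrative, not kcp's actual code:

```go
package main

import "fmt"

// lookupString walks a decoded-JSON object (map[string]any) along a
// field path and returns the string found there, if any.
func lookupString(obj map[string]any, path ...string) (string, bool) {
	cur := any(obj)
	for _, field := range path {
		m, ok := cur.(map[string]any)
		if !ok {
			return "", false
		}
		cur, ok = m[field]
		if !ok {
			return "", false
		}
	}
	s, ok := cur.(string)
	return s, ok
}

// saPaths is a hypothetical table of the workload types the syncer
// knows about, each with the JSON path to its service account field.
var saPaths = map[string][]string{
	"Deployment":  {"spec", "template", "spec", "serviceAccountName"},
	"StatefulSet": {"spec", "template", "spec", "serviceAccountName"},
	"DaemonSet":   {"spec", "template", "spec", "serviceAccountName"},
}

func main() {
	deployment := map[string]any{
		"kind": "Deployment",
		"spec": map[string]any{
			"template": map[string]any{
				"spec": map[string]any{
					"serviceAccountName": "tekton-controller",
				},
			},
		},
	}
	kind := deployment["kind"].(string)
	if sa, ok := lookupString(deployment, saPaths[kind]...); ok {
		fmt.Println("service account:", sa)
	}
}
```

The appeal of this shape is that extending support later (flag, annotation, or facet) only means populating the table from a different source.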
Another possibility: require CRD authors to add some annotation to the type telling us where to find strings representing service accounts?
Which CRDs for "but in the future we might have any arbitrary CRDs flowing down to pclusters" do you have in mind? Some custom workloads?
> Another possibility: require CRD authors to add some annotation to the type telling us where to find strings representing service accounts?
That sounds more feasible. We could start with those annotations. Eventually, if this become more serious and we have facets, this could be a facet onto the CRD type, e.g. SyncOptions.
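The annotation idea might look something like this — note the annotation key and value format below are invented for illustration; no such annotation exists in kcp at this point:

```yaml
# Hypothetical: a CRD author declares where service account references
# live in their type, so the syncer can rewrite them generically.
# The "kcp.dev/service-account-paths" key is made up for this sketch.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: taskruns.tekton.dev
  annotations:
    kcp.dev/service-account-paths: ".spec.serviceAccountName"
```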
Do we have a concrete example for a demo for prototype 2? i.e. an app that gets deployed to a pcluster that needs to interact w/the apiserver (which then needs to be kcp, not kube)?
> Do we have a concrete example for a demo for prototype 2? i.e. an app that gets deployed to a pcluster that needs to interact w/the apiserver (which then needs to be kcp, not kube)?
If a Tekton controller (a Deployment with a SA with ClusterRoles) is scheduled to a pcluster, it needs to point back up to kcp to watch for new TaskRun CRs created by users, turn them into Pods and put them back in kcp (to be scheduled to some pcluster), watch those Pods and update TaskRuns, etc. (Tekton, Knative, ArgoCD, any number of similar app-layer controllers on CRs)
In an ideal case, the Tekton controller(s) would be running totally outside of kcp and not be scheduled to pclusters at all, but since we want to support ~arbitrary similar controllers for ~arbitrary CR types we need some solution to run those in general.
An alternative is to have kcp schedule the TaskRun CRs down to pclusters where a Tekton controller is already running, pointed at its local API server, creating and watching Pods only in the local cluster. Is that how we want to solve this case? 🤔
Great example. So basically any controller that works with its own CRDs is a candidate... I'm inclined to move this to prototype3 as you suggested above.
Research: make sure we can rewrite a Deployment with custom service account logic and not fight with Kube service account controllers / kubelet.
Setting `automountServiceAccountToken: false` in `deployment.spec.template.spec` lets us control which volumes get mounted on the Pods, so we can override the default `kube-root-ca.crt` (or any other name).
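A minimal sketch of that approach, assuming the syncer supplies its own CA ConfigMap (volume and ConfigMap names here are placeholders):

```yaml
# Illustrative Deployment fragment: automatic SA token mounting is
# disabled so the syncer can mount its own CA/token volumes instead of
# fighting the kube service account admission controller.
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      automountServiceAccountToken: false
      volumes:
        - name: kcp-root-ca            # placeholder volume name
          configMap:
            name: kube-root-ca.crt     # name we now control / can deconflict
```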
related to https://github.com/kcp-dev/kcp/issues/535
@jmprusi is this ready to be closed out for p3 work (p3 work merged, remaining items noted as out of scope, moved to other issues, etc)?
I'm working on cleaning some things up and adding some tests, but it works. Hopefully we can merge the PR and close this issue today.
Moving to 0.4 to re-assess/update/close/etc.
The code implementing this is already in the tree. I filed https://github.com/kcp-dev/kcp/issues/917 to track the test addition(s) required to verify the behavior and guard against regressions.
We have to come back to this in v0.5.
Done in v0.5. Remaining task about bounded token moved into https://github.com/kcp-dev/contrib-tmc/issues/127.
Use Case: Deploy an unmodified application and make it transparently talk to the kcp apiserver
Objectives:
Details
When an application is deployed to kcp, the syncer should detect workloads that configure SAs and inject a kubeconfig that points them back up to kcp.
Action Items:
- [ ] ~~kcp-dev/contrib-tmc#127~~
- [ ] Mount the `kube-root-ca.crt` ConfigMap, and deconflict the name (https://github.com/kcp-dev/kcp/pull/679 && https://github.com/kcp-dev/kcp/pull/680)
- [ ] Inject `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` into the PodSpec