Today, we encode the Kubernetes cluster that the user wishes to connect to directly into the X.509 certificate. The request to the proxy specifies a generic SNI for Kubernetes access, and the Kubernetes component of the Proxy uses this encoded cluster name to select the correct Kubernetes cluster to forward to. This means a certificate must be signed for each Kubernetes cluster the user wishes to connect to.
Requiring that the user or machine be issued a certificate per Kubernetes cluster is problematic for a few reasons:
Performance. If the user has a large number of clusters, producing these certificates requires many back-and-forth RPCs with the Auth Server. This is potentially quite slow and places additional pressure on the Auth Server.
Handling ephemeral clusters. Whenever a new cluster comes online, a new certificate must be signed before it can be used.
Per-session MFA. The user must complete an MFA check for the signing of each certificate. When rolling out a change to a number of clusters, this is fairly burdensome. A single certificate, using a system similar to the recent per-session MFA changes for SSH access, would require only a single MFA interaction.
Solution: embed the Kubernetes cluster in the SNI
Currently, we use a static SNI for all Kubernetes access: `kube-teleport-proxy-alpn.ADDR`. We could modify this to include the name of the cluster the user wishes to connect to.
Problems:
Could run into issues with the maximum length of an SNI hostname (DNS limits a single label to 63 octets and the full name to 253)
Many Kubernetes API client libraries have spotty support for specifying an SNI
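To make the length problem concrete, here is a sketch of what building such an SNI might look like. The `kube-<cluster>.teleport-proxy-alpn.<addr>` layout is hypothetical, chosen only to illustrate the DNS limits; it is not a confirmed format.

```go
package main

import "fmt"

// buildKubeSNI sketches this alternative: encoding the target Kubernetes
// cluster into the SNI sent to the Teleport Proxy. DNS limits each label
// to 63 octets and the full hostname to 253, so sufficiently long cluster
// names simply cannot be encoded this way.
func buildKubeSNI(cluster, proxyAddr string) (string, error) {
	// Hypothetical layout: kube-<cluster>.teleport-proxy-alpn.<proxyAddr>
	label := "kube-" + cluster
	if len(label) > 63 {
		return "", fmt.Errorf("cluster name too long for a single SNI label: %q", cluster)
	}
	sni := label + ".teleport-proxy-alpn." + proxyAddr
	if len(sni) > 253 {
		return "", fmt.Errorf("SNI exceeds maximum hostname length: %q", sni)
	}
	return sni, nil
}

func main() {
	sni, err := buildKubeSNI("prod", "teleport.example.com")
	fmt.Println(sni, err)
}
```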
Solution: embed the Kubernetes cluster in the path of the request URL
Another idea is to put the name of the Kubernetes cluster into the path of the requests made to the Teleport Proxy. The Teleport Proxy, which terminates the TLS, can read this path segment and then strip it out before forwarding the request onward to the Kubernetes agent.
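The proxy-side handling described above could be sketched as follows. The `/v1/teleport/<kube-cluster>/...` path layout and the function name are hypothetical, used only to illustrate the extract-and-strip step:

```go
package main

import (
	"fmt"
	"strings"
)

// splitKubeClusterPath pulls the target Kubernetes cluster out of the
// request path and returns the stripped path to forward onward.
// Assumed (hypothetical) layout:
//   /v1/teleport/<kube-cluster>/<original Kubernetes API path>
func splitKubeClusterPath(reqPath string) (cluster, forwardPath string, ok bool) {
	const prefix = "/v1/teleport/"
	if !strings.HasPrefix(reqPath, prefix) {
		return "", "", false
	}
	rest := strings.TrimPrefix(reqPath, prefix)
	cluster, tail, found := strings.Cut(rest, "/")
	if !found || cluster == "" {
		return "", "", false
	}
	// Forward the original Kubernetes API path with the cluster segment removed.
	return cluster, "/" + tail, true
}

func main() {
	cluster, path, ok := splitKubeClusterPath("/v1/teleport/prod-cluster/api/v1/namespaces")
	fmt.Println(cluster, path, ok)
}
```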
This is supported by kubectl and by the few other clients I have looked at (Argo CD etc). @tigrato produced a short patch to demonstrate some potential changes for this: https://github.com/gravitational/teleport/commit/47b55a8825e95db89ea7e5a4814446c95de41121
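kubectl honors a path component in the cluster's `server` URL, so a kubeconfig entry for this scheme could look like the following sketch (the `/v1/teleport/<kube-cluster>` path layout is a hypothetical format, not a confirmed one):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: prod-cluster
  cluster:
    # Hypothetical path-based form: the trailing path segment names the
    # target Kubernetes cluster; the Teleport Proxy strips it before
    # forwarding the request to the Kubernetes agent.
    server: https://teleport.example.com:443/v1/teleport/prod-cluster
```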
We would need to determine a migration strategy - or if we'd want to keep around the old style permanently. An example migration strategy could look like:
Version X: Introduce support for handling Kubernetes requests with the target cluster in the path. Allow the new style of kubeconfig to be opted into where the user knows that the Agent/Proxy is at version X.
Version X+1: By default, start generating kubeconfigs that use this new style.
Alternatively, during the tsh kube login process, tsh could evaluate whether all registered proxies support this new behavior and generate the appropriate style of kubeconfig.
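That version-gating check could be sketched as below. The function name, the use of plain major versions, and the cutoff value are all assumptions made for illustration; a real implementation would compare full semantic versions reported by the proxies:

```go
package main

import "fmt"

// pathRoutingSupported reports whether every registered proxy is at or
// beyond the (hypothetical) major version that understands path-based
// cluster routing, so tsh knows which style of kubeconfig to generate.
func pathRoutingSupported(proxyMajorVersions []int, minMajor int) bool {
	if len(proxyMajorVersions) == 0 {
		// With no proxies to inspect, fall back to the old-style kubeconfig.
		return false
	}
	for _, v := range proxyMajorVersions {
		if v < minMajor {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(pathRoutingSupported([]int{12, 13, 12}, 12)) // every proxy new enough
	fmt.Println(pathRoutingSupported([]int{11, 12}, 12))     // one proxy too old
}
```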