stephenashank closed this issue 5 years ago
Came across this blog post which outlines an approach very similar to the one we'd like to take: https://codeascraft.com/2018/06/05/deploying-to-google-kubernetes-engine/
After some investigation, here are the steps I've mapped out for accomplishing this:
Modify the KubeConfig class so that, instead of retrieving client certs from the cluster's master auth and embedding them in the generated kubeconfig file, it uses Google OAuth to acquire an access token and expiry for the GCP SA and embeds those in the kubeconfig's User->AuthProvider section. For reference, see gcloud's auth provider implementation: https://github.com/google-cloud-sdk/google-cloud-sdk/blob/68b374bc5fe679d1f9a665451bd39fa1ed735581/lib/googlecloudsdk/api_lib/container/kubeconfig.py#L226
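A minimal sketch of what that generated user entry could look like, assuming the access token and expiry have already been obtained from Google OAuth. The function name, user name, and overall layout here are illustrative, not the plugin's actual API, though `access-token` and `expiry` are the config keys kubectl's gcp auth provider reads:

```python
import json


def auth_provider_user(access_token, expiry):
    """Build a kubeconfig 'user' entry that embeds a GCP access token
    in the gcp auth-provider, instead of embedding client certificates.

    Mirrors the structure produced by gcloud's kubeconfig.py, but with
    the token and expiry embedded directly (no cmd-path), since the
    plugin cannot assume gcloud is on the PATH.
    """
    return {
        "name": "gke-deployer",  # illustrative user name
        "user": {
            "auth-provider": {
                "name": "gcp",
                "config": {
                    # kubectl's gcp auth provider consumes these two keys
                    "access-token": access_token,
                    "expiry": expiry,  # RFC 3339 timestamp
                },
            }
        },
    }


if __name__ == "__main__":
    user = auth_provider_user("ya29.EXAMPLE", "2019-01-01T00:00:00Z")
    print(json.dumps(user, indent=2))
```

When the token expires, the plugin would regenerate this entry rather than rely on kubectl refreshing it, since no refresh command is embedded.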
Create and document a Helm chart that can be applied to a target cluster to grant the GCP SA the cluster roles needed for deployment.
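One way such a chart's RBAC template might look — the release naming, values keys, and the choice of the built-in `edit` ClusterRole are all illustrative, not settled:

```yaml
# templates/rbac.yaml (hypothetical chart template)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Release.Name }}-gke-deployer
subjects:
  # GKE surfaces the GCP service account's email as a Kubernetes User
- kind: User
  name: {{ .Values.gcpServiceAccountEmail }}
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit  # built-in role; swap for a narrower custom role if preferred
  apiGroup: rbac.authorization.k8s.io
```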
Another note, Etsy's deployinator project will be a good point of reference for this work: https://github.com/etsy/deployinator
Currently the plugin relies on legacy auth with client certs, which requires users to grant their service account the container.admin role. Ideally we should be using access tokens for auth, which would remove this prerequisite for the service account.
GKE supports auth using a GCP service account, which would provide an easier means of accomplishing this. We need to investigate how this works with the kubeconfig while we still have the dependency on the kubectl binary.
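Since the plugin still shells out to kubectl, the generated kubeconfig can simply be handed over via the KUBECONFIG environment variable, and kubectl will pick up the embedded auth-provider token on its own. A rough sketch of that invocation (the helper name and `kubectl apply` command are illustrative):

```python
import os


def kubectl_invocation(kubeconfig_path, manifest_path):
    """Build the kubectl command and environment for a deployment,
    pointing KUBECONFIG at the generated file with the embedded
    access token, so no client certs (and no container.admin role)
    are required."""
    cmd = ["kubectl", "apply", "-f", manifest_path]
    env = dict(os.environ, KUBECONFIG=kubeconfig_path)
    return cmd, env


if __name__ == "__main__":
    cmd, env = kubectl_invocation("/tmp/kubeconfig.yaml", "deployment.yaml")
    print(" ".join(cmd))
```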