krichter722 opened this issue 4 years ago
I just ran into this as well. For now, the best user-space solution seems to be running `gcloud` itself to generate the kubeconfig file, and then having the subsequent cluster operations depend on that role.
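A minimal sketch of that workaround as an Ansible task, assuming placeholder variables `cluster_name`, `gcp_zone` and `gcp_project` (none of these names are from the original report); `gcloud container clusters get-credentials` writes to the file named by the `KUBECONFIG` environment variable:

```yaml
# Workaround sketch: let gcloud write the kubeconfig instead of kubectl_path.
# cluster_name, gcp_zone and gcp_project are hypothetical placeholder variables.
- name: Generate kubeconfig via gcloud
  command: >
    gcloud container clusters get-credentials {{ cluster_name }}
    --zone {{ gcp_zone }}
    --project {{ gcp_project }}
  environment:
    KUBECONFIG: /tmp/1
```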
Was this ever fixed? I got here looking for a solution to the exact same problem, and installing and running `gcloud` just to get the cluster credentials is... clunky (to say the least).
SUMMARY
Specifying the `kubectl_path` parameter of `gcp_container_cluster` seems to cause the master auth credentials to be written to the kubeconfig file created at the specified path (judging from the exception below). This is unlike `gcloud container clusters create`, which creates a kubeconfig without master auth credentials, using state-of-the-art token-based authentication instead.
ISSUE TYPE
COMPONENT NAME
gcp_container_cluster
ANSIBLE VERSION
CONFIGURATION
OS / ENVIRONMENT
Docker image `centos/centos7`
STEPS TO REPRODUCE
Build
with
docker build -t dev .
and rundocker run -v "$(pwd):/mnt" -it dev sh -c 'cd /mnt; ansible-config dump --only-changed; ansible-playbook --ask-vault-pass install_gke_cluster.yml -vvv'
with playbookand task
roles/k8s.cluster.gke/tasks/main.yml
:put values in
roles/k8s.cluster.gke/defaults/main.yml
as I can't give you my GKE credentials :)EXPECTED RESULTS
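For illustration, a minimal task using `kubectl_path` might look like the following; this is a sketch with placeholder values and variable names, not the actual task from the report:

```yaml
# Hypothetical sketch -- not the original task file.
# gcp_project and gcp_cred_file are placeholder variables that would
# come from roles/k8s.cluster.gke/defaults/main.yml.
- name: Create GKE cluster and write kubeconfig to /tmp/1
  gcp_container_cluster:
    name: my-cluster
    location: us-central1-a
    initial_node_count: 1
    project: "{{ gcp_project }}"
    auth_kind: serviceaccount
    service_account_file: "{{ gcp_cred_file }}"
    kubectl_path: /tmp/1
    state: present
```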
EXPECTED RESULTS
I expect a kubeconfig to be created in `/tmp/1` (which is what `gcloud container clusters create` creates in `.kube/config`). The kubeconfig should contain token-based credentials, as that is what `gcloud` creates.

If you tackle the feature-request aspect and provide token-based authentication, you still might want to support username/password authentication for backwards compatibility. In that case the failure below should be handled more gracefully, at least with an intuitive error message that doesn't require looking into the source code. Both the current requirement for master credentials and how to get a token-based kubeconfig (as soon as it is implemented) should be documented.
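For reference, a sketch of the token-based user entry that `gcloud` wrote into kubeconfig files at the time (the `gcp` auth-provider backed by the SDK's `config-helper`; the entry name and `cmd-path` vary by installation, and the project/zone/cluster names below are placeholders):

```yaml
# Sketch of a token-based kubeconfig user entry as written by gcloud;
# names and paths are illustrative placeholders.
users:
- name: gke_my-project_us-central1-a_my-cluster
  user:
    auth-provider:
      name: gcp
      config:
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        cmd-args: config config-helper --format=json
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
```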
ACTUAL RESULTS
The command fails due to