jaytmiller opened this issue 4 years ago
Yeah, I've had to run

```sh
aws eks update-kubeconfig --name=<name> --region=<region> --profile=<profile>
```

after every cluster's creation. I have a feeling Terraform doesn't want to change your local kubeconfig.
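For reference, a minimal sketch of that manual step with example values (the cluster name, region, and profile below are placeholders): `update-kubeconfig` merges a context for the cluster into the target kubeconfig and makes it the current context, and as far as I know the `--kubeconfig` flag can point it at a file other than `~/.kube/config`:

```sh
# Merge a context for the new cluster into a kubeconfig and switch to it.
# Defaults to ~/.kube/config; --kubeconfig can target a separate file instead.
# my-cluster, us-east-1, and my-profile are placeholder values.
aws eks update-kubeconfig \
  --name my-cluster \
  --region us-east-1 \
  --profile my-profile \
  --kubeconfig "$PWD/kubeconfig_my-cluster"
```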
There are options for the `terraform-aws-eks` module that can output a kubeconfig file, but there would still be a step to switch to that file instead of `~/.kube/config`, I think.
Yeah, might be useful to output a kubeconfig file locally, and then set the `KUBECONFIG` env var to point to it? Terraform definitely doesn't wanna change your `~/.kube/config`...
I think by default the `terraform-aws-eks` module outputs a kubeconfig file into the directory where you ran the Terraform commands and names it `kubeconfig_${var.cluster_name}` (see https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/kubectl.tf). Maybe there is a good way to automatically set the `KUBECONFIG` env var to point to it?
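If you go that route, the manual switch would look something like this, assuming the module wrote the file into the working directory with its default naming (`my-cluster` is a placeholder):

```sh
# Point kubectl (and anything else honoring KUBECONFIG) at the generated
# file instead of ~/.kube/config; the filename follows the module's
# kubeconfig_${var.cluster_name} pattern.
export KUBECONFIG="$PWD/kubeconfig_my-cluster"

# Sanity check that we are now talking to the intended cluster.
kubectl config current-context
```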
We're wondering about maybe adding this to hubploy? (Maybe as an option which defaults to on?) How "expert" is the scenario where automatically updating the kubeconfig is bad?
`hubploy` doesn't depend on `KUBECONFIG` - it gets credentials from the keys set in `hubploy.yaml`. Current environment variables shouldn't matter, and I'd prefer to not have hubploy modify anything in `~/.kube/config`.
If you can find a way to set the `KUBECONFIG` env var after terraform completes, that's not a bad idea. However, I think changing what `KUBECONFIG` points to without the user explicitly asking for it is a recipe for trouble, as you can then accidentally perform operations in the wrong cluster! So I'd generally recommend against it, and instead document what people can do.
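One way to keep the switch explicit, sketched under the assumption that the module's default `kubeconfig_${var.cluster_name}` file sits in the working directory: a small hypothetical helper (not part of hubploy) that only *prints* the export line, so nothing changes until the user opts in with `eval`:

```sh
#!/usr/bin/env bash
# kubeconfig-hint.sh (hypothetical helper): print, but do not apply, the
# export needed to target the freshly created cluster. The user opts in with:
#   eval "$(./kubeconfig-hint.sh my-cluster)"
set -euo pipefail

cluster_name="${1:?usage: kubeconfig-hint.sh <cluster-name>}"
echo "export KUBECONFIG=$PWD/kubeconfig_${cluster_name}"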
OK, sounds totally reasonable to me. We're in the process of documenting the end-to-end process internally so this is easy to add to those docs explicitly. If we ever do make a comprehensive wrapper script to automate installing all the pieces end-to-end, we could also add this step to that script.
I ran into this:

```
Error: stat /Users/jmiller/.kube/config: no such file or directory

  on autoscaler.tf line 63, in resource "helm_release" "cluster-autoscaler":
  63: resource "helm_release" "cluster-autoscaler" {
```
and @yuvipanda suggested I write an issue about it.
The manual work-around was:

```sh
aws eks update-kubeconfig --name <CLUSTER-NAME>
```
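In other words, creating `~/.kube/config` before Terraform evaluates the `helm_release` resource made the error go away. A minimal sketch of that sequence, assuming the EKS cluster itself already exists (cluster name and region below are placeholders):

```sh
# The helm provider here reads ~/.kube/config, which does not exist yet
# on a fresh machine; create it first, then re-run Terraform.
aws eks update-kubeconfig --name my-cluster --region us-east-1
terraform apply
```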