kyma-incubator / terraform-provider-kind

Terraform Provider for kind (Kubernetes IN Docker)
Apache License 2.0

Ability to prevent the kind config file from being generated after applying a cluster #49

Closed nickjj closed 3 years ago

nickjj commented 3 years ago

Hi,

I noticed after applying everything I'll get a $clustername-config file generated in the same directory as where I ran Terraform. I'm curious if there's an option or a way to not have this config file generated. I didn't see this option documented anywhere.

tehcyx commented 3 years ago

Hi @nickjj ,

That's actually the kubeconfig file that allows you to use the cluster. There is a way to write it somewhere else if you want: https://github.com/kyma-incubator/terraform-provider-kind/pull/35

Which was requested a while back: https://github.com/kyma-incubator/terraform-provider-kind/issues/34
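
For example, a minimal sketch of using that attribute (the cluster name and target path here are just placeholders):

resource "kind_cluster" "default" {
  name            = "example"

  # kubeconfig_path was added in #35; point it wherever you want the file written
  kubeconfig_path = "/tmp/example-kubeconfig"
}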

I believe the kubeconfig is an important part of creating a cluster; why would you want to disable that?

nickjj commented 3 years ago

Ah, that was a detail that eksctl hid from me. It seems to write that to ~/.kube/config and honestly I never knew it existed. After using this provider I saw the config file in the root of my project and wasn't sure what it was.

Thanks for the clarification. Is it not standard to keep it in ~/.kube/config btw?

tehcyx commented 3 years ago

Usually you would leave ~/.kube/config alone from an automation perspective. If you choose to merge into it yourself, that is up to you. I think there have been some relevant discussions around this on the kind repository or the Kubernetes Slack already.

Personally, I'd also avoid introducing something that could break your ~/.kube/config file and make you lose access to the clusters that were already in there.

nickjj commented 3 years ago

Thanks. Maybe kind's CLI did that merging on your behalf; it's not something I ever thought about. Would it be worth calling this out in the docs? For example, what the file is and whether or not it should be committed to version control. I'm guessing it should be ignored if it contains access credentials for the cluster?

tehcyx commented 3 years ago

I believe kind itself does something with your current context; in general, kubectl is very aware of the $KUBECONFIG env variable. It could indeed be added to the docs, and definitely don't ever commit any kubeconfig to version control.

nickjj commented 3 years ago

Thanks.

Here's something I noticed with this configuration. If I have this provider configuration:

locals {
  k8s_config_path = "~/.kube/config"
}

provider "kubernetes" {
  config_path = local.k8s_config_path
}

provider "helm" {
  kubernetes {
    config_path = local.k8s_config_path
  }
}

Then I used your kind_cluster resource without supplying kubeconfig_path.

It will create a tfexample-config kubeconfig file in the directory where I ran terraform apply. However, if I go to my ~/.kube/config file, there is a new entry for the kind_cluster that was created. If I terraform destroy things, the ~/.kube/config file still has an entry for the deleted cluster name, and the config file in the Terraform directory still exists.

Is that expected behavior?

tehcyx commented 3 years ago

Config files are handled entirely by kind; the provider just passes the path to kind and kind handles the rest. See the changes relevant to that in #35.

If you check the tfexample-config after terraform destroy, you'll see it no longer holds any cluster information and is just a small leftover YAML structure. Personally, I think kind doesn't want to deal with the logic of checking whether there's more than one cluster left in the file, or with deleting files from your filesystem, so it just removes the cluster entry from the file but not the file itself.

nickjj commented 3 years ago

I don't know if this is related, but if I set kubeconfig_path = "~/.kube/config" within kind_cluster to match what the kind CLI does by default, then I always get The connection to the server 127.0.0.1:37649 was refused - did you specify the right host or port? when trying to access the cluster, both with kubectl and with the helm_release resource within Terraform.

The relevant bits of my config look like this:

locals {
  k8s_config_path = "~/.kube/config"
}

provider "kind" {
}

provider "kubernetes" {
  config_path = local.k8s_config_path
}

provider "helm" {
  kubernetes {
    config_path = local.k8s_config_path
  }
}

resource "kind_cluster" "default" {
  name = "tfclusterexample"
  kubeconfig_path = local.k8s_config_path
  wait_for_ready = true

  # ...
}

The only time I can get this provider to work is when I don't customize kubeconfig_path and use the default value, but then I end up with the config file in the same directory as my Terraform files, which I'm trying to avoid.

tehcyx commented 3 years ago

Have you tried a different path than ~/.kube/config? I'm wondering if it's trying to avoid the standard kubeconfig file.

Regarding the error The connection to the server 127.0.0.1:37649 was refused - did you specify the right host or port?: are you getting this from the kubectl CLI or the kind CLI? Can you check that, in either case, the kind cluster you created via Terraform is in the config file that $KUBECONFIG points to at the moment you run the command? If $KUBECONFIG is unset, the cluster should be present in ~/.kube/config.

Edit: Another idea is to check that the cluster is actually running via docker ps, which it should be since you specified wait_for_ready = true.

nickjj commented 3 years ago

That error was with kubectl.

The cluster is running, at least judging by the kindest/node containers being up.

echo $KUBECONFIG shows it is unset.

This provider wrote the file ~/.kube/config, but it treated ~ as a literal relative directory name instead of expanding it to my home directory. The literal ~/.kube/config did have the cluster info; the "real" ~/.kube/config file does not.

tehcyx commented 3 years ago

That is some weird behavior. Thanks for checking that out.

nickjj commented 3 years ago

No problem. If I replace ~/.kube/config with /home/nick/.kube/config for kubeconfig_path, then everything works as expected. There's no config file in the terraform apply directory, the entry is saved to ~/.kube/config along with being set as the current context, and kubectl get all -A works. Running terraform destroy also removes the entry from ~/.kube/config correctly.

Looks like maybe a bug with it not evaluating ~?

nickjj commented 3 years ago

Seems to be expected behavior after learning a bit more about Terraform.

If I use this then it works:

locals {
  k8s_config_path = pathexpand("~/.kube/config")
}

This is a built-in Terraform function: https://www.terraform.io/docs/language/functions/pathexpand.html

So now the question becomes whether or not you want the default config location to behave like pathexpand("~/.kube/config") to match what the kind CLI does. I'll leave that one up to you, but I think this issue is resolved in the sense that it's possible to make this provider act like the kind CLI by setting kubeconfig_path = pathexpand("~/.kube/config").
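
Roughly, the full working wiring looks like this (it's just the config from my earlier comments with pathexpand applied; the cluster name is specific to my setup):

locals {
  k8s_config_path = pathexpand("~/.kube/config")
}

provider "kind" {
}

provider "kubernetes" {
  config_path = local.k8s_config_path
}

provider "helm" {
  kubernetes {
    config_path = local.k8s_config_path
  }
}

resource "kind_cluster" "default" {
  # pathexpand resolves ~ before the path is handed to the provider
  name            = "tfclusterexample"
  kubeconfig_path = local.k8s_config_path
  wait_for_ready  = true

  # ...
}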

Thanks!

tehcyx commented 3 years ago

Thanks @nickjj for digging deeper. If I understand correctly, this only appears if the user explicitly sets a path containing ~, since we don't set the path in the provider. That would boil down to another documentation change from my point of view.