ionos-cloud / terraform-provider-ionoscloud

The IonosCloud Terraform provider lets you deploy and configure resources using the IonosCloud APIs.

Native resource for fetching kubeconfig #66

Closed: worldofgeese closed this issue 3 years ago

worldofgeese commented 3 years ago

Current Provider Version

6.0.0-beta.2

Use-cases

I want an easy way to use the helm_release and banzaicloud/k8s resources without resorting to local-exec to pull down a kubeconfig file. Local files are often inaccessible in CI/CD pipelines and are otherwise non-deterministic, unreliable objects.

Attempted Solutions

Here's what Famly uses now to get a usable kubeconfig:

data "ionoscloud_k8s_cluster" "yb1" {
  id = "${ionoscloud_k8s_cluster.yb1.id}"
}
resource "null_resource" "getcfg" {
  provisioner "local-exec" {
    command = <<EOT
      curl --include \
     --request GET \
     --user "${var.username}:${var.secret}" \
     https://api.ionos.com/cloudapi/v6/k8s/${ionoscloud_k8s_cluster.yb1.id}/kubeconfig | sed -n '17,34p' > kubeconfig.yaml && sleep 30
 EOT
  }
  depends_on = [ionoscloud_k8s_cluster.yb1]
  provisioner "local-exec" {
    when    = destroy
    command = <<EOT
       rm kubeconfig.yaml 
  EOT
  }
}
provider "helm" {
  kubernetes {
    config_path = "./kubeconfig.yaml"
  }
} 
resource "helm_release" "ingress-nginx" {
  repository       = "https://kubernetes.github.io/ingress-nginx"
  name             = "ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
}

Proposal

It could be as simple as:

provider "helm" {
  kubernetes {
    config_path = data.ionoscloud_k8s_cluster.this.kube_config
  }
}

However, helm_release will fail with "error loading config file: ... file name too long".

Here's an example of how to fetch a certificate for AWS EKS:

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
      command     = "aws"
    }
  }
}

Here's how the rancher provider offers it (in my opinion the most elegant approach):

provider "k8s" {
  host = rke_cluster.rancher_cluster.api_server_url

  client_certificate     = rke_cluster.rancher_cluster.client_cert
  client_key             = rke_cluster.rancher_cluster.client_key
  cluster_ca_certificate = rke_cluster.rancher_cluster.ca_crt

  load_config_file = false
}
jbuchhammer commented 3 years ago

@worldofgeese: JSON requires that properties and values be quoted, e.g. "kind":"Config". The quotes were evidently lost when the output was written to file.

But more important: never ever provide your complete kubeconfig in a public repo! Please remove it immediately from the comment and delete the cluster.

worldofgeese commented 3 years ago

> @worldofgeese: JSON requires that properties and values be quoted, e.g. "kind":"Config". The quotes were evidently lost when the output was written to file.
>
> But more important: never ever provide your complete kubeconfig in a public repo! Please remove it immediately from the comment.

I haven't. Please review the posted kubeconfig. Sensitive values have been replaced by "stringtoken" and "tokenvalue".

jbuchhammer commented 3 years ago

> > @worldofgeese: JSON requires that properties and values be quoted, e.g. "kind":"Config". The quotes were evidently lost when the output was written to file. But more important: never ever provide your complete kubeconfig in a public repo! Please remove it immediately from the comment.
>
> I haven't. Please review the posted kubeconfig. Sensitive values have been replaced by "stringtoken" and "tokenvalue".

@worldofgeese: node name: famly-prod-s3hqot7kk3, correct? I guess you replaced the values afterwards by editing, but any subscriber of this repo will have received the initial mail.

worldofgeese commented 3 years ago

I did, immediately after posting, but I can see that GitHub preserves the original comment, so I've now deleted the comment entirely.

worldofgeese commented 3 years ago

Back to the request at hand: I'd appreciate knowing what the IONOS Cloud team thinks of my proposals. HashiCorp itself recommends using local-exec only as a last resort, and I'm inclined to agree. It's flaky and especially hard to manage in tooling around Terraform such as Terragrunt or Terraform Cloud.

mflorin commented 3 years ago

Hello @worldofgeese ,

Due to security concerns related to storing the kube config in the state, we will not implement a resource specifically for that, nor will we include a kube_config attribute in the ionoscloud_k8s_cluster resource, because the value would end up in the state.

Our recommendation is to use the ionoscloud_k8s_cluster data source, which does not store the value of kube_config; you can extract it in your plan like this:

data "ionoscloud_k8s_cluster" "k8s_cluster_example" {
  name     = "k8s-demo"
}

resource "null_resource" "getcfg" {
  provisioner "local-exec" {
    command = "echo \"${yamlencode(data.ionoscloud_k8s_cluster.k8s_cluster_example.kube_config)}\" > kubecfg.yaml"
  }
}
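
The same file can also be written without shelling out, via the hashicorp/local provider's local_file resource. A sketch; note that local_file records the rendered content in the Terraform state, so this is a convenience rather than a security improvement:

resource "local_file" "kubecfg" {
  # yamlencode renders the kube_config data as YAML, as in the echo example.
  content         = yamlencode(data.ionoscloud_k8s_cluster.k8s_cluster_example.kube_config)
  filename        = "${path.module}/kubecfg.yaml"
  file_permission = "0600" # keep the credentials file private
}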
worldofgeese commented 3 years ago

Hi @mflorin,

It would be great if the data source were usable, but it isn't: echoing the data source writes unusable JSON to the file. Can you try my code with your suggested method of using the data source? It won't work.

A file on disk is a bigger security issue in CI/CD pipelines than a data source value that can be passed around. I made one proposal in my original post:

provider "helm" {
  kubernetes {
    config_path = data.ionoscloud_k8s_cluster.this.kube_config
  }
}

This is much safer than feeding a pipeline a file manipulated by a local-exec provisioner, but it fails with the error quoted in my first post.

mflorin commented 3 years ago

@worldofgeese - it cannot work, since config_path refers to a file system path, while kube_config from the data source contains the actual kubeconfig contents ... Given that, I'm not sure what your proposal is exactly.

worldofgeese commented 3 years ago

@mflorin the actual kube_config produced is not correct; it cannot be used as produced. Can you run your examples on a test cluster, then pass the file to your choice of k9s, Lens, or kubectl? They will report that the file is unusable.

worldofgeese commented 3 years ago

> Given that, I'm not sure what your proposal is exactly.

Any form of authentication to the cluster using Terraform constructs that does not rely on what HashiCorp calls last-resort methods. Mutable out-of-band files are extremely fragile and vulnerable to accidental git check-ins. Terraform has no way of tracking their state, their contents cannot be captured in variables, and there is little you can do with them inside Terraform data structures.

Given this example:

provider "k8s" {
  host = rke_cluster.rancher_cluster.api_server_url

  client_certificate     = rke_cluster.rancher_cluster.client_cert
  client_key             = rke_cluster.rancher_cluster.client_key
  cluster_ca_certificate = rke_cluster.rancher_cluster.ca_crt

  load_config_file = false
}

could data source attributes be added to pass into providers like so?

provider "k8s" {
  host = rke_cluster.rancher_cluster.api_server_url

  client_certificate     = data.ionoscloud_k8s_cluster.this.client_cert
  client_key             = data.ionoscloud_k8s_cluster.this.client_key
  cluster_ca_certificate = data.ionoscloud_k8s_cluster.this.ca_crt

  load_config_file = false
}
mflorin commented 3 years ago

> @mflorin the actual kube_config produced is not correct; it cannot be used as produced. Can you run your examples on a test cluster, then pass the file to your choice of k9s, Lens, or kubectl? They will report that the file is unusable.

You're right, my example was wrong: it was missing a yamlencode. I've edited the comment; please review it. That said, the data source works as expected, and the actual kube_config is correct.

mflorin commented 3 years ago

> > Given that, I'm not sure what your proposal is exactly.
>
> Any form of authentication to the cluster using Terraform constructs that does not rely on what HashiCorp calls last-resort methods. [...] could data source attributes be added to pass into providers like so? [configurations quoted in full above]

Yes, we can add new attributes to the k8s_cluster data source to reference certificate data.

worldofgeese commented 3 years ago

> You're right, my example was wrong: it was missing a yamlencode. I've edited the comment; please review it. That said, the data source works as expected, and the actual kube_config is correct.

You're right: your data source does work as expected with the addition of yamlencode. You're also right that passing the data source into config_path, which expects a file path, will not work; that was an oversight on my part.

worldofgeese commented 3 years ago

> Yes, we can add new attributes to the k8s_cluster data source to reference certificate data.

Thank you very much! That's really great news :confetti_ball: :dancers:

mflorin commented 3 years ago

@worldofgeese, would bearer tokens suffice? I'll have to double-check, but as far as I can see, Ionos Cloud k8s clusters are not configured with client certificate authentication but with bearer tokens, which I think you could use via the token attribute of the hashicorp kubernetes provider you're trying to use.
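
For context, token-based authentication with the hashicorp/kubernetes provider looks roughly like the sketch below; the var.* values are placeholders, not existing ionoscloud attributes:

provider "kubernetes" {
  host  = var.cluster_endpoint # placeholder: the API server URL
  token = var.bearer_token     # placeholder: a bearer token with cluster access

  # Needed when the API server certificate is not signed by a publicly trusted CA.
  cluster_ca_certificate = base64decode(var.ca_crt_base64) # placeholder
}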

worldofgeese commented 3 years ago

@mflorin I've never made use of the token attribute before, but if you think it will work as a solution, sure!

mflorin commented 3 years ago

Hello @worldofgeese,

We've just released v6.0.0-beta.3 which adds the functionality we discussed. In short, you can use the user token like this:

resource "ionoscloud_k8s_cluster" "test" {
  name = "test_cluster"
  maintenance_window {
    day_of_the_week = "Saturday"
    time            = "03:58:25Z"
  }
}

data "ionoscloud_k8s_cluster" "test" {
  name = "test_cluster"
}

provider "kubernetes" {
  host = data.ionoscloud_k8s_cluster.test.server
  token =  data.ionoscloud_k8s_cluster.test.user_tokens["cluster-admin"]
}
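
Tying this back to the original helm_release use-case, the same attributes should slot into the helm provider's nested kubernetes block. A sketch, assuming token authentication alone is sufficient for the cluster:

provider "helm" {
  kubernetes {
    # Same data source attributes as in the kubernetes provider example above.
    host  = data.ionoscloud_k8s_cluster.test.server
    token = data.ionoscloud_k8s_cluster.test.user_tokens["cluster-admin"]
  }
}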

We've added 4 new attributes, among them the server and user_tokens attributes used above.

For more information about what was added, please see the documentation at https://registry.terraform.io/providers/ionos-cloud/ionoscloud/latest/docs/data-sources/k8s_cluster#config.