worldofgeese closed this issue 3 years ago.
@worldofgeese : JSON requires that properties and values be quoted, e.g. `"kind": "Config"`. These quotes were obviously lost when the config was written to file.

But more important: never, ever publish your complete kubeconfig in a public repo! Please remove it from the comment immediately and delete the cluster.
> But more important: never, ever publish your complete kubeconfig in a public repo! Please remove it from the comment immediately.
I haven't. Please review the posted kubeconfig. Sensitive values have been replaced by "stringtoken" and "tokenvalue".
> Sensitive values have been replaced by "stringtoken" and "tokenvalue".
@worldofgeese : node name: famly-prod-s3hqot7kk3, correct? I guess you replaced the values afterwards by editing, but every subscriber to this repo will have received the original comment by email.
I did, immediately after posting, but I can see GitHub preserves the original comment, so I've now deleted the comment.
Back to the request at hand: I'd appreciate knowing what the IONOS Cloud team thinks of my proposals. HashiCorp themselves recommend using `local-exec` only as a last resort, and I'm inclined to agree: it's flaky and especially hard to manage in tooling around Terraform such as Terragrunt or Terraform Cloud.
Hello @worldofgeese ,

Due to security concerns related to storing the kube config in the state, we will not implement a resource specifically for that, nor will we include a `kube_config` attribute in the `ionoscloud_k8s_cluster` resource, because the value would end up in the state.

Our recommendation is to use the `ionoscloud_k8s_cluster` data source, which does not store the value of `kube_config` and which you can extract in your plan, like this:
data "ionoscloud_k8s_cluster" "k8s_cluster_example" {
name = "k8s-demo"
}
resource "null_resource" "getcfg" {
provisioner "local-exec" {
command = "echo \"${yamlencode(data.ionoscloud_k8s_cluster.k8s_cluster_example.kube_config)}\" > kubecfg.yaml"
}
}
Hi @mflorin,
It would be great if the data source were usable, but it isn't: echoing it to a file produces unusable JSON. Can you try my code with your suggested method of using the data source? It won't work.
A file object is a bigger security issue in CI/CD pipelines than a passable data resource. I wrote one proposal in my original post:
provider "helm" {
kubernetes {
config_path = data.ionoscloud_k8s_cluster.this.kube_config
}
}
This is much safer than passing a file object manipulated with a `local-exec` provisioner into a pipeline, but it doesn't work, failing with the error given in my first post.
@worldofgeese - it cannot work, since `config_path` refers to a file system path, while `kube_config` from the data source contains the actual kubeconfig content ... Given that, I'm not sure what your proposal is exactly.
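For illustration only, a sketch of how a path could be produced from the data source value - this assumes the `local_sensitive_file` resource from the hashicorp/local provider, which is not part of the original discussion:

```hcl
# Sketch: materialize the kube_config value on disk, then hand the
# resulting path to config_path. Note this still writes the kubeconfig
# to the runner's file system, with the drawbacks discussed above.
resource "local_sensitive_file" "kubecfg" {
  content  = yamlencode(data.ionoscloud_k8s_cluster.this.kube_config)
  filename = "${path.module}/kubecfg.yaml"
}

provider "helm" {
  kubernetes {
    config_path = local_sensitive_file.kubecfg.filename
  }
}
```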
@mflorin the actual kube_config produced is not correct; it cannot be used as produced. Can you run your examples on a test cluster, then pass the resulting file to your choice of k9s, Lens or kubectl? They will report that the file is unusable.
> Given that, I'm not sure what your proposal is exactly.
Any form of authentication to the cluster using Terraform constructs that does not rely on what HashiCorp describes as last-resort methods. Mutable out-of-band files are extremely fragile and vulnerable to accidental git check-ins; Terraform has no way of tracking their state, their contents cannot be captured in variables, and there is little you can do with them inside Terraform data structures.
Given this example:
provider "k8s" {
host = rke_cluster.rancher_cluster.api_server_url
client_certificate = rke_cluster.rancher_cluster.client_cert
client_key = rke_cluster.rancher_cluster.client_key
cluster_ca_certificate = rke_cluster.rancher_cluster.ca_crt
load_config_file = false
}
could data resources be created to pass in to providers like so?
provider "k8s" {
host = rke_cluster.rancher_cluster.api_server_url
client_certificate = data.ionoscloud_k8s_cluster.this.client_cert
client_key = data.ionoscloud_k8s_cluster.this.client_key
cluster_ca_certificate = data.ionoscloud_k8s_cluster.this.ca_crt
load_config_file = false
}
> @mflorin the actual kube_config produced is not correct. It cannot be used as it is produced.
You're right, my example was wrong - it was missing a `yamlencode` - I've edited the comment, please review it.

That said, the data source works as expected; the actual kube_config is correct.
> could data resources be created to pass in to providers like so?
Yes, we can add new attributes to the `k8s_cluster` data source to reference certificate data.
> You're right, my example was wrong - it was missing a `yamlencode` - I've edited the comment, please review it. That said, the data source works as expected; the actual kube_config is correct.
You're right, your data source does work as expected with the addition of `yamlencode`. You're also right that passing the data source into `config_path`, which expects a file path, will not work; that was an oversight on my part.
> Yes, we can add new attributes to the `k8s_cluster` data source to reference certificate data.
Thank you very much! That's really great news :confetti_ball: :dancers:
@worldofgeese , would bearer tokens suffice? I'll have to double check, but as far as I can see, Ionos Cloud k8s clusters are not configured with client certificate authentication but with bearer tokens, which I think you could use via the `token` attribute of the HashiCorp `kubernetes` provider you're trying to use.
@mflorin I've never made use of the `token` attribute before, but if you think it will work as a solution, sure!
Hello @worldofgeese,

We've just released `v6.0.0-beta.3`, which adds the functionality we discussed. In short, you can use the user token like this:
resource "ionoscloud_k8s_cluster" "test" {
name = "test_cluster"
maintenance_window {
day_of_the_week = "Saturday"
time = "03:58:25Z"
}
}
data "ionoscloud_k8s_cluster" "test" {
name = "test_cluster"
}
provider "kubernetes" {
host = data.ionoscloud_k8s_cluster.test.server
token = data.ionoscloud_k8s_cluster.test.user_tokens["cluster-admin"]
}
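For the `helm_release` use case that started this thread, the same attributes should also plug into the helm provider's nested `kubernetes` block; a sketch (CA verification omitted for brevity):

```hcl
# Sketch: reuse the data source attributes from the example above.
# In practice you would also configure cluster_ca_certificate rather
# than rely on an unverified connection.
provider "helm" {
  kubernetes {
    host  = data.ionoscloud_k8s_cluster.test.server
    token = data.ionoscloud_k8s_cluster.test.user_tokens["cluster-admin"]
  }
}
```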
We've added 4 new attributes; for more information about what was added, please see the documentation at https://registry.terraform.io/providers/ionos-cloud/ionoscloud/latest/docs/data-sources/k8s_cluster#config .
Current Provider Version
Use-cases
I want to easily make use of the `helm_release` and `banzaicloud/k8s` resources without using `local-exec` to pull down a kubeconfig file. Local files are often untouchable in CI/CD pipelines and are otherwise non-deterministic, unreliable objects.

Attempted Solutions
Here's what Famly uses now to get a usable kubeconfig, a `local-exec` workaround along these lines (names are illustrative):
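```hcl
# Illustrative sketch of the local-exec workaround described above;
# the data source and cluster names are hypothetical.
data "ionoscloud_k8s_cluster" "this" {
  name = "k8s-demo"
}

resource "null_resource" "kubeconfig" {
  provisioner "local-exec" {
    command = "echo \"${yamlencode(data.ionoscloud_k8s_cluster.this.kube_config)}\" > kubecfg.yaml"
  }
}
```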
Proposal
It could be as simple as the proposal shown below (the same one quoted in the discussion above):
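```hcl
# The proposal as quoted earlier in this thread: pass the data source
# value directly to the helm provider.
provider "helm" {
  kubernetes {
    config_path = data.ionoscloud_k8s_cluster.this.kube_config
  }
}
```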
However, `helm_release` will error with `error loading config file: ... file name too long`.

Here's an example of how to fetch a certificate for AWS EKS (the sketch below follows the standard pattern from the AWS provider documentation; the cluster name is illustrative):
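```hcl
# Standard EKS pattern: read the cluster endpoint, CA certificate and a
# short-lived token, then configure the kubernetes provider in-memory.
data "aws_eks_cluster" "cluster" {
  name = "example"
}

data "aws_eks_cluster_auth" "cluster" {
  name = "example"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
```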
How the rancher provider offers it (in my opinion the most elegant approach):
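```hcl
# As quoted earlier in this thread: the rke provider exposes every
# credential as an attribute, so no file ever touches disk.
provider "k8s" {
  host                   = rke_cluster.rancher_cluster.api_server_url
  client_certificate     = rke_cluster.rancher_cluster.client_cert
  client_key             = rke_cluster.rancher_cluster.client_key
  cluster_ca_certificate = rke_cluster.rancher_cluster.ca_crt
  load_config_file       = false
}
```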