Closed · lego963 closed 2 years ago
Hi @lego963 ,
Thank you very much for your pull request. I keep an eye on provider updates on your side, and did notice the new data source for kubeconfig a while ago. However, I decided to not use it as we use custom context names for our clusters.
As we have people working on different clusters across multiple projects at a given time, being able to identify the active cluster by setting the context name to the CCE cluster name (which we generally prefix with project context and the stage) is crucial for us. In fact, we even use it as a safety measure on some scripts to check the context name to ensure they are not executed accidentally on the wrong cluster. Therefore, it was simpler to generate the configuration manually with context name overrides as opposed to modifying the data source's returned JSON.
The biggest drawback of this approach is that it is not possible to limit the validity period of the kubeconfig as you could with the data source. Nonetheless, this has never been a requirement so far, and it can still be achieved with the current module structure via the `cluster_id` output:
```hcl
data "opentelekomcloud_cce_cluster_kubeconfig_v3" "kubeconfig" {
  cluster_id = module.cce.cluster_id
  duration   = var.kubeconfig_valid_days
}
```
I believe, however, that adding the JSON-formatted output and perhaps the additional clusters/contexts for TLS and internal access could be beneficial. I will add them to the locals-based config generation with custom names and change the default context to use TLS-verified access.
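For illustration, a locals-based kubeconfig with a custom context name could look roughly like the following. This is only a sketch of the general technique, not the module's actual code; all variable names (`var.project`, `var.stage`, `var.cluster_endpoint`, etc.) are hypothetical:

```hcl
# Sketch: render a kubeconfig in locals so the context name can be set to
# "<project>-<stage>-<cluster>" instead of the provider's default naming.
locals {
  context_name = "${var.project}-${var.stage}-${var.cluster_name}"

  kubeconfig = yamlencode({
    apiVersion = "v1"
    kind       = "Config"
    clusters = [{
      name = local.context_name
      cluster = {
        server                       = var.cluster_endpoint
        "certificate-authority-data" = var.cluster_ca_cert
      }
    }]
    users = [{
      name = local.context_name
      user = {
        "client-certificate-data" = var.client_cert
        "client-key-data"         = var.client_key
      }
    }]
    contexts = [{
      name = local.context_name
      context = {
        cluster = local.context_name
        user    = local.context_name
      }
    }]
    "current-context" = local.context_name
  })
}

output "kubeconfig" {
  value     = local.kubeconfig
  sensitive = true
}
```

With this approach the context name is fully under the module's control, which is what enables the "am I on the right cluster?" checks described above; the trade-off, as noted, is losing the data source's built-in validity duration.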
Please let me know if I am missing a benefit of using the data source, or if there is a more elegant solution to the custom context name requirement.
Best, Can.
@canaykin hiho, thanks for the review. I didn't know anything about the infra you use :) For now, I will close the PR. Thanks again.
Description
Use `data_source/opentelekomcloud_cce_cluster_kubeconfig_v3` instead of building the kubeconfig file with `locals`.